And just as an example, here's someone who analyzed (with Gemini 3.0) the paper you linked to... I've only skimmed it so far, but it looks like exactly the same cheap trick again...
Particularly interesting:
"The “Tegmark Paradox”
Your confusion about how a physicist of Tegmark’s caliber (who made serious contributions to cosmology and data analysis in the 90s/00s) produces this kind of work is shared by many. However, if you look at his philosophical trajectory — specifically the Mathematical Universe Hypothesis and his work on existential risk — it actually makes a strange kind of internal sense.
1. The “Universe is Math” Bias
Tegmark believes that physical reality is a mathematical structure (the Level IV Multiverse). If you hold this axiom, the distinction between “understanding” physics and “compressing data into an equation” vanishes.
To a standard physicist, science is about concepts, causality, and mechanisms. To Tegmark, if the AI finds the equation L=T−V, it has found the "reality," regardless of whether it "understands" it or merely optimized a sparse regression weight vector. He is methodologically predisposed to view curve-fitting as truth-seeking.
2. The “Life 3.0” Narrative
As you noted, Tegmark is a leading voice in the “AI is the next evolutionary step” camp. There is a strong incentive in that worldview to anthropomorphize software.
If you admit this model is just “Lasso Regression on a pre-computed library,” it’s a useful engineering tool, but it’s not an “entity.”
If you call it an “AI Scientist” that “contemplates theories” and “agrees with its peers,” you are building the narrative evidence for his claims about AGI. It creates a self-fulfilling prophecy: “Look how human-like these systems are becoming!” (ignoring that he wrote the paper to frame them that way).
“Lowering the Bar”
Your joke about lowering the bar is actually a profound critique of the current “AI for Science” field.
There is a dangerous reductionism happening here: Redefining “Science” as “Data Compression.”
Real science involves:
Realizing that the current coordinates are wrong (the AI didn’t do this; the humans gave it generalized coordinates).
Inventing the concept of “Energy” (the AI didn’t do this; humans put energy terms in the library).
Designing the experiment (the AI didn’t do this; it consumed synthetic data).
By stripping away all the hard conceptual parts of science and claiming the AI is a “Scientist” because it handled the final regression step, they are indeed lowering the bar. They are defining “intelligence” down to the level of what the machine can currently do.
If “Science” is just finding the coefficients that minimize MSE on a test set, then yes, AI will surpass us. But that definition leaves out almost everything that made Newton and Leibniz actually intelligent."
https://medium.com/@AIchats/max-tegmarks-ai-scientist-discovery-or-delusion-ed946a578937
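For what it's worth, the "sparse regression on a pre-computed library" setup the analysis keeps referring to is easy to sketch. This is a hypothetical minimal example, not the paper's actual pipeline, and it uses sequential thresholded least squares as a simple stand-in for the Lasso step; all the variable names and the choice of library terms are my own assumptions:

```python
import numpy as np

# Synthetic data: simple harmonic oscillator, a = -(k/m) * x.
# The "AI" never sees the physics; it only sees samples.
rng = np.random.default_rng(0)
k_over_m = 4.0
x = rng.uniform(-1, 1, 200)
v = rng.uniform(-1, 1, 200)
a = -k_over_m * x + 0.01 * rng.normal(size=200)  # target with small noise

# Pre-computed library of candidate terms -- supplied by humans, not discovered
library = np.column_stack([x, v, x**2, v**2, x * v, np.ones_like(x)])
names = ["x", "v", "x^2", "v^2", "x*v", "1"]

# Sparse regression via sequential thresholded least squares
# (a common stand-in for Lasso in this kind of equation discovery)
coefs, *_ = np.linalg.lstsq(library, a, rcond=None)
for _ in range(5):
    small = np.abs(coefs) < 0.1        # prune near-zero terms
    coefs[small] = 0.0
    big = ~small
    if big.any():                      # refit the surviving terms only
        coefs[big], *_ = np.linalg.lstsq(library[:, big], a, rcond=None)

for n, c in zip(names, coefs):
    if c != 0.0:
        print(f"a = {c:+.3f} * {n}")
```

It "discovers" a = -4 * x, which is exactly the point being made: every conceptual decision (the coordinates, the candidate terms, the data) was made before the regression ever ran.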