List

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World

The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy

  • How Redditors Exposed The Stock Market | The Problem With Jon Stewart - has nice diagrams (and overall a good explanation of what was happening).
  • Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World.
  • Do only what only you can do - Dijkstra
  • Their book is a frequentist book
  • Cornell is a monopoly

    Also Wolfram. And most important of all is "The Unfinished Game" by Keith Devlin.

    Fri Dec 13. 2024.

    We are at the beginning of a new technology era, and there will be those who can leverage the new-age tools and those who cannot. For some people, a web browser is as hard as crypto. Billions of people live on this planet.

    Sat Dec 14. 2024.

    Regressions are possible for AI. Via cascading. Three Laws of Robotics. 1942.

    Sat Dec 28. 2024.

    Opened up new dimensions in AI. Borland all the way.

    Thu Jan 09. 2025.

    LA burning, talks about Canada becoming the 51st state. NN is a corner case of von Neumann. It's ironic how people try to judge architecture by the size of the building. For centuries they have tried.

    Fri Jan 10. 2025.

    Surprisingly figured out how to blend Monads with AI. So it is possible to blend them with Rust *and* (differently) with AI. It's universal. Should not be surprising. The trick is from one of the smartest dudes of the 20th century. He shared it openly before the war. Openly. He knew.

    Sun Jan 12. 2025.

    Moved a bit from conjecture to defensible proof. Yes, the Monads. Got the math to a level that is verifiably superior to at least one of the major players. If not *the* major player. They think they can compete with several centuries of math? Of course they cannot.

    Mon Jan 13. 2025.

    Monads > Markov | Fourier
    Semiotics > ZK

    Tue Jan 14. 2025.

    "I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times" -- This famous quote is widely attributed to Bruce Lee and reflects his philosophy on mastery and focus. However, there is no definitive evidence that he wrote or said these exact words.

    Tue Jan 14. 2025.

    Started v2.0 at the end of 2024. Feels *great*.

    Fri Jan 17. 2025.

    2.0 works.
    Looks like the best pragmatic way to stabilize inference is via Semiotics/VM. Approaching LLMs as VMs. Possible.

    Margin Call: 2011
    War in Syria: 2011

    Sat Jan 18. 2025.

    Approaching AI like supply chain transform looks solid.

    They just switch from GRL to GL. That's all they do. The donkey will die, or the king will die. It's elementary school level.

    Fri Jan 24. 2025.

    Inference is *not* stable.

    1968 and then 1988. The cycle is easy to see. 2008 and 2028.

    Sat Jan 25. 2025.

    YAML / HJSON / HOCON. Still no good. Had to invent new notation. BNF * Semiotics - worked.

    Sun Jan 26. 2025.

    NY blockade timings were (and *are*) 5 years. Effectively. Paris was 10 years.

    Mon Jan 27. 2025.

    DeepSeek. Stampede. Their onboarding is not linear. They have better QA. The race is on. Looks like indexing, though. Yes, it is indexing. And they all have problem(s). This one has problems with sharding and uptime.

    Tue Jan 28. 2025.

    Completed Vlad's notation. 2003 (PerlP). Trick with ':' derived from XSLScript (2001). Second step - 2014 (Pfs): '>-'. Already Monads, but a corner case. Today - '.' resolves the thingy. Verified with AI. A boost of a few decades compressed into a few months. Good luck trying to contain this.

    Sat Feb 01. 2025.

    MR works on AI (depending on domain). Distilled or not does not matter much. That was supposed to get the race going. And it did. Just moving towards Vlad's notation. "No big deal," as he would have said.

    Sun Feb 02. 2025.

    Fibonacci > Hooke's law.

    Tue Feb 04. 2025.

    The way MR works depends on the domain big time. Some things it makes better, some things it makes critically worse.

    So I had a solid defence that I had built over a few months (if not years), and it was holding up against everything. I just put it down today because it's no use against AI. Feels very strange to put down a *perfect* defence. AI will change everything. I think.

    Wed Feb 05. 2025.

    This AI thing basically fixed the internet. It is now of use again. Only I use it via my own filters. And I think *very* few people do it like that currently.

    Indexing Consistency looks feasible on AI. That will change *a lot*. Why are they not talking about that? They're scared to even talk about it, I think. Also, the big names are clearly beginning to lose the race.

    Thu Feb 06. 2025.

    Semiotics > C++

    Blend of Rumba with MX Judo.

    Sun Feb 09. 2025.

    Figured out the book to write: "Hidden Treasure of Vlad's Notation". I could be writing it for decades now, and he does not care. Semiotics v3.0, indeed.

    Mon Feb 10. 2025.

    Altman said prices on AI decrease 10x a year. Yeah, it was a bubble. A big one. They have no apps even. (I do have.)

    Wed Feb 12. 2025.

    The industry benchmark is out and I beat them. Not surprisingly, really.

    Fri Feb 14. 2025.

    Brooks' design worked today. On a complex case. On 4 actors exactly. Lead-copilot-librarian-tester.

    Mon Feb 17. 2025.

    AI is like Web 1.0. Kind of hard to remember how life was before it. So AI is basically Web 4.0. Not bad (it's no salvation, of course). Stuff keeps working, I use it every day now, but not the way I used to.

    Tue Feb 18. 2025.

    Writing stuff before AI is like doing everything without a hammer. Possible, sure. There will be generational problems though.

    Wed Feb 19. 2025.

    The best tool in its class is gema (since the 1970s). I did better, so now I need to think a bit about this all. I would not have found gema without my AI. Hmmm ... Might be able to blend, like I did with Pytago. Basically, the same pp transform on another domain. Looks doable. Core looks OK. Layers are no use. Memcached was the same.

    Added Grok. Which is fast and overall might be ok.

    Thu Feb 20. 2025.

    Grok is ok.

    Fri Feb 21. 2025.

    The quote "Things are always at their best in their beginning" is from Blaise Pascal, a French mathematician, physicist, and philosopher. It appears in his "Pensées" (Thoughts), which was published posthumously in 1670.

    Grok doesn't care about English. Mistral does not care about sources. Clearly those engines are now beginning to drift apart from each other. That clearly brings indexing back, etc. Ouroboros over and over and over.

    My AI engine just found my code produced 25 years ago. In a place I would never guess. Attributed to god knows whom.

    In the past, when you could do X 10-100x better than anybody (which is a rare event), you just did X, of course. With AI it is much more complex. Harder to find a perfect shot. Because there are now too many.

    Hooke's law + reflection. Turns out. k1/k2.

    Sat Feb 22. 2025.

    Escalation of Rumba / MX Judo. 2 years of covid. 3 years of war. 2028 is the end of the cycle. Holds for a month.

    Found *major* loophole in most important AI models. This race is far from over, turns out. What are they all doing with all those billions? Gambling, I guess.

    Well, of course their benchmarks are all fake now, but they did not start it. They just destroy what was already kind of openly broken. Only this is the Car in Harlem thingy. Things are not as broken as they say they are. The usual.

    It all does not matter. The real thing is that education was suffocating people globally and AI has detonated it from inside, like Web 1.0 did. The blend of crypto and AI is one major force. Like Web 1.0 used to be. This is Web 4.0, clearly. Every day I find awesome stuff. It is all out there in the open! How did Google and others manage to *not* find it? Yeah.

    Sun Feb 23. 2025.

    Now that I look at this all, I actually *did* seq2seq when dealing with XSLT. Only I did not know it at that time and I called it "chunks". Because if you try to simplify XSLT transforms, you will end up with that model one way or another. Whether you want it or not. Chunks had paths, though. It's the same puzzle of Vlad's notation over and over again. Like it was for him. Because the weights are a distraction. N is good enough. Since the 1930s it is.

    *Major* blow to key AI models today. Check-n-mate. The race is over - looks like. Explains the insanity in some places.

    As per AI, each and every scripting engine implemented to this point is suboptimal. And AI is right! There is one elegant (hardware-friendly) transform that should have been employed by engine writers, but they could not handle the complexity! Well, lots of systems would collapse, looks like. And what happens in some parts of the government is *very* important now. Naturally.

    Last attempts at sanity were in 2005. I think that was the year they decided that sanity was not going to work, so printing money would be a simpler way out. I think they were surprised it lasted even 20 years.

    Mon Feb 24. 2025.

    Removed the one and only W machine. Was thinking about the implications for several months. Still - getting rid of the TV was (much) harder. Same $15/month. No more wintertales for me (as they say in some parts of the EU, turns out).

    Tue Feb 25. 2025.

    Revisited a few pages. This time with AI. AI helps a lot. And most of the time it points the right way. It only helps if you already know 90% of the answer, of course. Maybe 70%, though.

    Another day, another AI benchmark. Turns out, I have already seen this very kind of diversion - many years ago. Seen it here, in the valley. Very effective trick.

    One month from conjecture to proof. On LLM/VM. It's a (classic) linear programming thing, basically.
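
    The entry calls the LLM/VM result "a (classic) linear programming thing" without spelling the formulation out, so here is only a generic illustration of what a classic LP looks like, solved the textbook way: the optimum of a feasible, bounded LP lies at a vertex of the constraint polytope. The objective and constraints below are made-up examples, not the actual LLM/VM formulation.

```python
# Generic 2-variable LP solved by vertex enumeration (illustrative only).
# maximize 3x + 2y  subject to  x + y <= 4,  x <= 3,  x >= 0,  y >= 0
from itertools import combinations

# Each constraint is a*x + b*y <= c.
cons = [(1, 1, 4), (1, 0, 3), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    # Solve the 2x2 system where both constraints hold with equality.
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel boundaries, no vertex
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

# Candidate vertices: feasible intersections of constraint boundaries.
vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]

# Evaluate the objective at every vertex and keep the best.
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
```

    Vertex enumeration only scales to toy sizes; a real instance would go through a simplex or interior-point solver, but the "conjecture to proof in one month" shape of the argument is the same.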

    Wed Feb 26. 2025.

    Windsurf blows VS Code out of the water, like the Borland IDE destroyed MS VC back in the day.

    linkedin

    Thu Feb 27. 2025.

    AI found PlotJuggler. PlotJuggler is superb. UI puzzles.

    Fri Feb 28. 2025.

    Pushed Vlad's notation all the way to VDL. Thanks, Niklaus. Thanks, Edgar. This is going to be interesting. Nothing comes close. The Master Algo book explains why it is like that. With details and names. In plain English.

    Sun Mar 02. 2025.

    Tried DALL-E. Yes, today. I am focused on coding.

    Calibration of NN - check.

    Technically - today is the first day I could have ditched the web browser. Thinking about this all.

    Figured out the first agent-based pipeline end to end. Might do it without ADE first. I'm undecided on ADE. I am also undecided on transforms. Complex stuff. So now I just need to implement a hello world end to end and see. VDL looks inevitable the moment you step outside MR. The damage MR did to the world is really impressive.

    Mon Mar 03. 2025.

    Found a *major* fallacy in AI. It really does not see differences in complexity; you need to be human for that. The expert system approach still beats brute force. Not surprisingly. The expert systems were done by a (much) smarter generation. In my case the agent contradicted the laws of robotics, simply ignoring the reliable solution given to it verbatim and instead injecting garbage regexprs, because in the AI world regexpr engines have no bugs. Good to be AI.

    Tried Google Code Assist. No surprises.

    Tue Mar 04. 2025.

    I have one simple coding benchmark that all AI coding engines fail badly. Sage did not fail it.

    Sun Mar 09. 2025.

    Disgusting garbage escalated all over the world. Including here, of course. I will not be writing much now. Like one girl told me here, "the only consolation - those assholes will die soon". She said it 10 years ago, but only now do I understand she meant "sooner than us". Smart girl from the north. I think she might not be alive by this time. She was already in her last battle, and that was even before the EU. There are two key groups of people now. Some powerful group thinks that this will be forever. Some Italians are *not* in that group, and they are rather clear in their communication. For 10+ years now.

    Mon Mar 10. 2025.

    Figured out the second trick from Brooks (+ von Neumann) to improve on AI (first was a brigade, second is tables/code). Clearly, the golden era of this civilization was 1970-80. A generation without a world war (because WW2 -> nukes, and nukes kept idiots in check for a little bit). Everything was created during those 10 short years. Basically, The Mythical Man-Month worked for me for several generations and still works in this bubble.

    Vlad's Monads. Brooks' compute. Then goes storage. I did not see the fractal/monadic link between von Neumann and Brooks. I only see it because NVDA did what they did. Simple fractal trick. I talked to the guy who did it decades ago. He did it here. That's how I know. So they are all second-handers on steroids. The irony.

    Got 70% of the product designed.

    Very reliable factories producing very unreliable products only work for the factory owners. No matter the goods. The Status Civilization. 1960. "Non-drug-addiction" is still a crime. Yeah.

    Tue Mar 11. 2025.

    The ES wrapped as MCP is the next move.

    Von Neumann keeps giving. 1944-45. They had to send their work to *somebody* from the EU. Most likely the dude was that "somebody".

    Forth keeps giving also. Very much applicable to LLMs. Looks like MCP or not does not matter. Like it did not matter for Forth, so it should be OK.