I enjoy reading Peter’s ‘Python studies’ and was surprised to see a comparison of different LLMs solving Advent of Code problems here, but the linked article is pretty cool.
Peter and a friend of his wrote an article over a year ago discussing whether or not LLMs are already AGI, and after re-reading that article my opinion moved a bit toward: LLMs are AGI in broad digital domains. I still need to see embodied AI in robots and physical devices before I think we are 100% of the way there. Still, I use Gemini and a lot of open-weight models for two things: 1. coding problems, and 2. after I read or watch material on philosophy, I almost always ask Gemini for a summary, references, and a short discussion based on what Gemini knows about me.
I'm sorry, but what's the point here? It's not for a job, or to improve an LLM, or to do something useful per se, just to "enjoy" how version X or Y of an LLM can solve problems.
I don't want to sound grumpy, but it doesn't achieve anything; this is just a showcase of how a "calculator with a small probability of failure can succeed".
Move on, do something useful, don't stop being amazed by AI, but please stop throwing it in my face.
You are conflating "hype" with any positive outlook. It has some uses and some people are using it. That's not "hype". It is exhausting to see it everywhere, so I sympathize.