you're going off-topic
I'm talking about AI
yes you do
Most people would know…
The pals at Hack Club have REMOVED the current AI model and REPLACED it with LLaMA 3.3!!!
NOW, the website is SO MUCH faster! It's even faster than ChatGPT, while being only the tiniest bit less accurate (in fact, the responses feel so much nicer and more like it understands what I'm saying!)
Responses are almost instant, too.
Live RN at hackclub-chatbot.vercel.app.
I already use this as my main chatbot now fyi, there's only one thing left for me to implement for it to become my ONLY chatbot: cross-chat memory (coming soon!)
how is chatgpt one of the most popular AIs, if it's this easy to make one faster than it?
oh nono, my website is NOT a new AI I made myself, that would cost millions of dollars and months of work. This is an AI frontend, meaning it uses an AI API to send and receive messages, and it shows the received messages to the user.
The AI I'm using is LLaMA 3.3 70B, which is an open-source AI made by Meta.
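To make the "frontend" idea concrete: here's a minimal sketch of what a site like that does under the hood. It keeps the chat history itself and forwards it to a hosted model behind an OpenAI-compatible chat completions API. The endpoint URL, key, and exact model identifier below are illustrative assumptions, not the site's actual code.

```python
# Minimal sketch of an AI frontend: the site stores the conversation
# and sends it to a hosted model via an HTTP API. The URL and model
# name are hypothetical; real providers use their own identifiers.
import json

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical provider
MODEL = "llama-3.3-70b"  # provider-specific model name (assumption)

def build_chat_request(history, user_message):
    """Append the new user turn and build the JSON body a
    chat-completions-style API typically expects."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": MODEL, "messages": messages}

history = [{"role": "system", "content": "You are a helpful assistant."}]
body = build_chat_request(history, "Hello!")
print(json.dumps(body, indent=2))

# Actually sending it would look roughly like (not run here):
#   resp = requests.post(API_URL, json=body,
#                        headers={"Authorization": f"Bearer {API_KEY}"})
#   reply = resp.json()["choices"][0]["message"]["content"]
```

The frontend's whole job is that loop: append the user's message, POST the history, display the reply, repeat. "Cross-chat memory" would just mean persisting parts of that history across sessions.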
Oh, so you're taking an AI and using it for your website? And then changing the way it answers a lil bit?
Basically, yes. LLaMA 3.3 isn't available online, though. It's only available via API, and since the API is paid, my lovely website could be considered the best way to access the model for free with no ads or microtransactions.
P.S. LLaMA is also better than ChatGPT in certain benchmarks. At this point, the only three reasons to use ChatGPT over my site are:
- ChatGPT has cross-chat memory,
- you've already used ChatGPT and don't want to build up all the memories/chats again with my site, or
- search/image/audio/reasoning functionality.
Can't just say LLaMA or ChatGPT. You have to compare specific models.
3.3-70B in fact does not beat o3-mini, for example, and 3.3-70B also narrowly loses to GPT-4o.
But again, you can't always say better or worse; it depends on the use case. Benchmarks are just benchmarks, basically data (MMLU, GSM8K, etc.)
I agree, reasoning models almost always beat out regular models.
For my use case, LLaMA is absolutely way more useful than GPT-4o. Speed is something I love and value so much, especially when the smarter option is just very marginally better.
3.3-70B is smarter in some things, again, don't rely on benchmarks. GPT-4o will beat LLaMA 3.3-70B in some stuff. LLaMA 3.3-70B will beat GPT-4o in some stuff.
Reasoning models aren't always better, they are really just a meme lol. Could've done better but it is what it is right now.
that's what I'm saying
I prefer LLaMA because it has almost instant responses, and it is close enough in terms of intelligence to GPT-4o that it really doesnāt matter which is better in benchmarks.
This topic was automatically closed after 7 days. New replies are no longer allowed.