MASSIVE CHATBOT UPDATE (not really.)

you're going off-topic

I'm talking abt AI

yes you do

Most lime would know…

The pals at Hack Club have REMOVEDDD the current AI model and REPLACED it with LLaMA 3.3!!!

NOW, the website is SO MUCH faster! It's even faster than ChatGPT, while being only the tiniest bit less accurate (and honestly, the responses feel nicer, like it actually understands what I'm saying!)

Responses are almost instant, too.

Live RN at hackclub-chatbot.vercel.app.

I already use this as my main chatbot now fyi, there’s only one thing left for me to implement for it to become my ONLY chatbot: cross-chat memory (coming soon!)
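For anyone curious how cross-chat memory could work in a frontend like this: one simple approach is persisting notable facts in browser storage and prepending them to the system prompt of every new chat. This is just a sketch of one possible approach, not how the actual site does (or will do) it; the `hc_chat_memory` key and the `MemoryEntry` shape are made up for illustration.

```typescript
// Sketch: cross-chat memory via a key-value store (e.g. localStorage in the
// browser). The key name and entry shape are invented for illustration.

interface MemoryEntry {
  chatId: string;
  note: string;    // a fact worth remembering across chats
  savedAt: number; // epoch ms
}

// Minimal subset of the Web Storage API, so this also runs outside a browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const MEMORY_KEY = "hc_chat_memory"; // hypothetical key name

function loadMemory(store: KVStore): MemoryEntry[] {
  const raw = store.getItem(MEMORY_KEY);
  return raw ? (JSON.parse(raw) as MemoryEntry[]) : [];
}

function remember(store: KVStore, chatId: string, note: string): void {
  const entries = loadMemory(store);
  entries.push({ chatId, note, savedAt: Date.now() });
  store.setItem(MEMORY_KEY, JSON.stringify(entries));
}

// Prepend remembered notes to the system prompt of every new chat.
function buildSystemPrompt(store: KVStore, base: string): string {
  const notes = loadMemory(store).map((e) => `- ${e.note}`);
  return notes.length
    ? `${base}\nKnown about the user:\n${notes.join("\n")}`
    : base;
}
```

In the browser you'd pass `localStorage` straight in as the `KVStore`; the interface just makes the sketch testable anywhere.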

how is ChatGPT one of the most popular AIs if it's this easy to make one faster than it? :sob: :pray:

oh nono, my website is NOT a new AI I made myself; that would cost millions of dollars and months of work. This is an AI frontend, meaning it uses an AI API: it sends your messages to the API and shows the responses it gets back.

The AI I'm using is LLaMA 3.3 70b, which is an open-weight model made by Meta.
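A frontend like this typically just POSTs the conversation to an OpenAI-compatible chat-completions endpoint and renders the reply. Here's a rough sketch of that round trip; the endpoint URL, the `llama-3.3-70b-versatile` model id, and the API key placeholder are all assumptions for illustration, not taken from the actual site.

```typescript
// Sketch of the request an AI frontend sends to an OpenAI-compatible
// chat-completions API. Endpoint, model id, and key are assumptions.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const API_KEY = "YOUR_API_KEY"; // placeholder, never hardcode a real key

// Pure helper: turn the conversation so far into a request body.
function buildRequestBody(history: ChatMessage[], userInput: string) {
  return {
    model: "llama-3.3-70b-versatile", // assumed hosted-model id
    messages: [...history, { role: "user" as const, content: userInput }],
  };
}

// The frontend then does something like:
async function send(history: ChatMessage[], userInput: string): Promise<string> {
  const res = await fetch("https://api.example.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify(buildRequestBody(history, userInput)),
  });
  const data: any = await res.json();
  return data.choices[0].message.content; // shown to the user
}
```

The heavy lifting (the model itself) lives entirely behind the API, which is why the site itself can stay so small and fast.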

Oh, so you're taking an AI and using it for your website? And then changing the way it answers a lil bit?

Basically, yes. LLaMA 3.3 isn't available online, though; it's only available via API, and since the API is paid, my lovely website could be considered the best way to access the model for free, with no ads or microtransactions.

P.S. LLaMA is also better than ChatGPT in certain benchmarks. At this point, the only three reasons to use ChatGPT over my site are:

  1. ChatGPT has cross-chat memory,
  2. you've already built up memories/chats with ChatGPT and don't want to start over on my site, or
  3. you need search/image/audio/reasoning functionality.

You can't just say "LLaMA" or "ChatGPT", though; you have to compare specific models.

3.3-70b in fact does not beat o3-mini, for example, and 3.3-70b also barely loses to GPT-4o.

but again, you can't always say better or worse; it depends on the use case. Benchmarks are just benchmarks, basically data (MMLU, GSM8K, etc.)

I agree, reasoning models almost always beat out regular models.

For my use case, LLaMA is absolutely way more useful than GPT-4o. Speed is something I love and value a lot, especially when the smarter option is only marginally better.

3.3-70b is smarter at some things; again, don't rely on benchmarks. GPT-4o will beat LLaMA 3.3-70b at some stuff, and LLaMA 3.3-70b will beat GPT-4o at some stuff.

Reasoning models aren't always better, they're really just a meme lol. Could've been done better, but it is what it is right now.

that’s what I’m saying :sob:

I prefer LLaMA because it has almost instant responses, and it is close enough in terms of intelligence to GPT-4o that it really doesn’t matter which is better in benchmarks.

This topic was automatically closed after 7 days. New replies are no longer allowed.