That model was trained in part using their unreleased R1 "reasoning" model. Today they released R1 itself, along with a whole family of new models derived from that base.
There's a lot of stuff in the new release.
DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:
DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.
I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.
This is a fascinating flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).
Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?
(I also just noticed the MIT license files say "Copyright (c) 2023 DeepSeek" so they may need to pay a little more attention to how they copied those in.)
Licensing aside, these distilled models are fascinating beasts.
Running DeepSeek-R1-Distill-Llama-8B-GGUF
Quantized versions are already starting to show up. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.
I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.
First I fetched and ran the model using Ollama itself:
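    # Q8_0 here is one quantization option; it lines up with the 8.5GB download mentioned below
    ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0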
This downloads around 8.5GB of model data and starts an interactive chat interface.
Once the model has been fetched, LLM can talk to it too. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
I installed the plugin and ran it like this:
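    llm install llm-ollama
    # the model is exposed under the same name Ollama used to fetch it
    llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
      'a joke about a pelican and a walrus who run a tea room together'

Because everything is logged, the full response can be pulled back up later with the llm logs command.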
Each response starts with a <think>...</think> pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:
Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.
First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're generally found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.
[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the </think> tag:
The Joke:
A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its big size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place saucers on the table, causing a few spills.
After a series of comical mishaps, the walrus looks up and says with a smile, "This isn't so fishy anymore." The pelican smirks and says, "Maybe not, but we do have a lot of krill in our tea!"
Explanation:
- Setup: Pelican and Walrus run an unlikely tea room. Their sizes cause comical mayhem.
- Punchline: "This isn't so fishy anymore" is a pun on the situation becoming less absurd. The follow-up joke introduces "krill," a play on words that ties into both marine life ("fish") and tea, making it humorous while fitting the theme.
So yeah, the joke is awful. But the process of getting there was such an interesting insight into how these new models work.
This was a relatively small 8B model. I'm looking forward to trying out the Llama 70B version, which isn't yet available in a GGUF I can run with Ollama. Given the strength of Llama 3.3 70B (currently my favourite GPT-4 class model that I've run on my own machine) I have high expectations.
Update 21st January 2025: I got this quantized version of that Llama 3.3 70B R1 distilled model working like this (a 34GB download):
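    # Q3_K_M is the quantization tag that comes in at roughly 34GB
    ollama pull hf.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF:Q3_K_M
    llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF:Q3_K_M' \
      'a joke about a pelican and a walrus who run a tea room together'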
Can it draw a pelican?
I tried my classic Generate an SVG of a pelican riding a bicycle prompt too. It did not do very well:
It looked to me like it got the order of the elements wrong, so I followed up with:
the background ended up covering the rest of the image
It thought some more and gave me this:
As with the earlier joke, the chain of thought in the transcript was far more interesting than the end result.
Other ways to try DeepSeek-R1
If you want to try the model out without installing anything you can do so using chat.deepseek.com. You'll need to create an account (sign in with Google, use an email address or provide a Chinese +86 phone number) and then select the "DeepThink" option below the prompt input box.
DeepSeek provide the model via their API, using an OpenAI-compatible endpoint. You can access that through LLM by dropping this into your extra-openai-models.yaml configuration file:
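    # a minimal entry for DeepSeek's OpenAI-compatible endpoint
    - model_id: deepseek-reasoner
      model_name: deepseek-reasoner
      api_base: "https://api.deepseek.com"
      api_key_name: deepseek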
Then run llm keys set deepseek and paste in your API key, then use llm -m deepseek-reasoner 'prompt' to run prompts.
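For example:

    llm keys set deepseek
    # paste your DeepSeek API key when prompted
    llm -m deepseek-reasoner 'prompt goes here'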
This won't show you the reasoning tokens, sadly. Those are served up by the API (example here) but LLM doesn't yet have a way to display them.
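If you want to see them anyway you can call the API directly. Here's a rough curl sketch; the reasoning_content field name comes from DeepSeek's API documentation, so treat the exact response shape as an assumption:

    curl -s https://api.deepseek.com/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
      -d '{
        "model": "deepseek-reasoner",
        "messages": [{"role": "user", "content": "a joke about a pelican and a walrus"}]
      }' | jq -r '.choices[0].message.reasoning_content'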