Important context and a good decision
4chan at least had a consistent brand of being the anti-social network and being full of Nazis, weirdos, pedophiles and people who are just anti-social for the lulz. You couldn’t ruin 4chan.
Twitter’s image was being the “internet town-square for serious thinkers”, with politicians, scientists, journalists and a small but good measure of standard shitposters. Losing that brand diminishes its value massively. Unfortunately neither Bluesky nor Mastodon has been able to capture that clientele yet.
It’s the famous “As long as you’re not Google, Amazon or Apple” licence.
For a user without much technical experience, a ready-made GUI like Jan.ai, with automatic model downloads and the ability to run models via the ggml library on consumer-grade hardware (Mac M-series chips or cheap GPUs from Nvidia or AMD), is probably a good start.
For slightly more technically proficient users, Ollama is probably a great choice for hosting your own OpenAI-like API for local models. I mostly run gemma2 or small Llama 3.1-class models with it.
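For illustration, here is a rough sketch of talking to Ollama’s OpenAI-compatible endpoint from R, assuming Ollama is running on its default local port 11434 and a model like gemma2 has already been pulled:

```r
# Rough sketch: chat with a local Ollama model through its
# OpenAI-compatible API (default port 11434); assumes the model
# was fetched beforehand with `ollama pull gemma2`.
library(httr)

resp <- POST(
  "http://localhost:11434/v1/chat/completions",
  body = list(
    model = "gemma2",
    messages = list(list(role = "user", content = "Summarise Rcpp in one sentence."))
  ),
  encode = "json"
)

# Pull the assistant's reply out of the parsed JSON response
content(resp)$choices[[1]]$message$content
```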
The market will segment away from the current tech anyway. CATL’s sodium-ion cells, with comparatively low densities but also extremely low prices per kWh, will likely win the low-end market and the market for stationary storage, simply because of the much lower resource costs. The high end will go to things like this battery by Samsung (or other comparable pilot products). The current technology will likely end up in an awkward middle spot.
I have a ten-year-old MacBook Pro with an i7 and 16 GB of RAM. Just because this thing was a total beast when it was new does not mean it isn’t old now. It works great with Ubuntu, though. Still, it’s not a good idea to run it as a server. My Raspberry Pi consumes a lot less energy for basic web-hosting tasks. I only use the old MBP to run memory-intensive Docker containers like openrouteservice, and I guess just using some hosting service for that would not be much more expensive.
Depends on what you do with it. Synthetic data seems to be really powerful if it’s human-controlled and well built. Stuff like TinyStories (simple LLM-generated stories that only use the vocabulary of a three-year-old) can be used to make tiny language models produce sensible English output. My favourite newer example is the base data for AlphaProof (LLM-generated translations of proofs from maths papers into the proof-validation system Lean), used to teach an LLM the basic structure of mathematical proofs. The validation in Lean itself can be used to keep only high-quality (i.e. correct) proofs. Since AlphaProof is basically a reinforcement learning routine that uses an LLM to propose promising proof steps and thereby shrink the search space, applying it yields new correct proofs that can be fed back to further improve its training data.
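To make the Lean part concrete, here is a toy example (not actual AlphaProof data): a statement plus a candidate proof that Lean’s checker either accepts or rejects, which is exactly the property that lets you keep only the correct generated proofs.

```lean
-- Toy illustration, not AlphaProof data: Lean only accepts this file
-- if the proof term really proves the stated theorem.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```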
Nah, this is about R.
Also, multiple plotting systems are included: the old base graphics from the 90s, lattice for really custom things, ggplot as the most humanly understandable and reasonable system for composing plots, and great interfaces to plotly and d3.js.
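As a small illustration of the ggplot composition style (just the built-in mtcars data, nothing fancy):

```r
# Minimal ggplot2 sketch: layers are composed with `+`
library(ggplot2)

ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
  geom_point() +                             # raw data points
  geom_smooth(method = "lm", se = FALSE) +   # one linear fit per cylinder group
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon", colour = "Cylinders")
```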
Rcpp is crazy good. You can super easily write C++ inline with R code, it is easy to use, and half of the tidyverse is written in it.
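A minimal sketch of what the inline part looks like (a toy function compiled on the fly with Rcpp::cppFunction):

```r
# Toy example: define a C++ function inline and call it like any R function
library(Rcpp)

cppFunction('
  double sumSquares(NumericVector x) {
    double total = 0.0;
    for (int i = 0; i < x.size(); ++i) {
      total += x[i] * x[i];
    }
    return total;
  }
')

sumSquares(c(1, 2, 3))  # 14
```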
The LFS users are still busy growing their coffee plant