LUP 525: Beating Apple to the Sauce
- Air Date: 2023-08-27
- Duration: 72 mins 20 secs
About this episode
We daily drive Asahi Linux on a MacBook, chat about how the team beat Apple to a major GPU milestone, and share an easy way to self-host open-source ChatGPT alternatives.
Your hosts
Sponsored by
- Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
- Linode Cloud Hosting: A special offer for all Linux Unplugged Podcast listeners and new Linode customers: visit linode.com/unplugged and receive $100 towards your new account.
- Kolide: Kolide is a device trust solution for companies with Okta, and they ensure that if a device isn’t trusted and secure, it can’t log into your cloud apps.
Episode links
- 🎉 Alby — Boost into the show: first grab Alby, top it off, and then head over to the Podcast Index.
- ⚡️ LINUX Unplugged on Podcastindex.org — You can boost from the web: once Alby is topped off, visit our page on the Podcast Index.
- Hector Martin's Controversial Question — Would you be okay with us adding some really trivial telemetry to the Asahi installer?
- Berlin with Brent — Brent will be back in Berlin for the Nextcloud Conference and can't get enough of Berlin Meetups! Join him Friday, September 8th, at 6 PM.
- Fedora Asahi Remix
- Fedora Asahi Remix Coming For Fedora Linux On Apple Silicon Hardware — Fedora Asahi Remix will be their new flagship distribution for providing a polished Linux experience on Apple Silicon.
- Fedora Asahi Remix: bringing Fedora to Apple Silicon Macs (Flock To Fedora 2023)
- Our new flagship distro: Fedora Asahi Remix — We’re still working out the kinks and making things even better, so we are not quite ready to call this a release yet. We aim to officially release the Fedora Asahi Remix by the end of August 2023. Look forward to many new features, machine support, and more!
- Hector Martin: “Okay, I’m going to be honest…” — I apologize to all Asahi Linux users. You deserve better. When I chose Arch Linux ARM as a base I didn't realize it would have so many basic QA issues.
- Coming soon: Fedora for Apple Silicon Macs! (Fedora Discourse)
- The first conformant M1 GPU driver — Our reverse-engineered, free and open source graphics drivers are the world’s only conformant OpenGL ES 3.1 implementation for M1- and M2-family graphics hardware. That means our driver passed tens of thousands of tests to demonstrate correctness and is now recognized by the industry.
- Asahi Linux’s Apple M1/M2 Gallium3D Driver Now OpenGL ES 3.1 Conformant — It's even more rewarding for the community developers in that Apple doesn't provide any conformant (OpenGL or Vulkan) graphics drivers for their Arm-based platform.
- Feature Support · AsahiLinux/docs Wiki
- Switch to the kernel-16k variant - Fedora Discussion
- NixOS: Unlocking your LUKS via SSH and Tor
- StreetComplete — Easy to use OpenStreetMap editor for Android.
- getumbrel/llama-gpt: A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device.
- serge-chat/serge: A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.
- liltom-eth/llama2-webui: Run any Llama 2 locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac).
- llama.cpp — Port of Facebook’s LLaMA model in C/C++ (see the minimal inference sketch after this list).
- Llama2.c — Inference Llama 2 in one file of pure C
- Koboldcpp — A simple one-file way to run various GGML models with KoboldAI’s UI
- lollms-webui — Lord of Large Language Models Web User Interface
- LM Studio — Discover, download, and run local LLMs
- text-generation-webui — A Gradio web UI for Large Language Models. Supports transformers, GPTQ, llama.cpp (ggml/gguf), Llama models.
- A comprehensive guide to running Llama 2 locally — Code Llama is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.
- Meta Releases Code Llama, a Coding Version of Llama 2
- Introducing Code Llama, a state-of-the-art large language model for coding
- Llama and ChatGPT Are Not Open-Source
- Meta launches Llama 2, a source-available AI model that allows commercial applications — A family of pretrained and fine-tuned language models in sizes from 7 to 70 billion parameters.
- Meta’s Llama 2 is not open source — Meta's newly released large language model Llama 2 is not open source.
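Many of the self-hosting picks above build on llama.cpp in some form, wrapping it in a web UI or container. As a rough illustration of what those tools do under the hood, here is a minimal sketch of local Llama 2 inference using the llama-cpp-python bindings; the model file name and prompt are placeholders, and it assumes you have already downloaded a quantized model converted for llama.cpp.

```python
# Minimal local-inference sketch using the llama-cpp-python bindings
# (pip install llama-cpp-python); model path and prompt are placeholders.
from llama_cpp import Llama

# Load a locally downloaded, quantized Llama 2 chat model.
llm = Llama(
    model_path="./models/llama-2-7b-chat.q4_0.gguf",  # hypothetical file name
    n_ctx=2048,  # context window in tokens
)

# One completion, entirely on-device; nothing leaves the machine.
result = llm(
    "Q: Name three Linux distributions that run on Apple Silicon. A:",
    max_tokens=64,
    stop=["Q:"],  # stop before the model invents the next question
)

print(result["choices"][0]["text"].strip())
```

Projects like llama-gpt and Serge essentially wrap this loop in a Docker container with a chat UI and an API on top.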
Tags
16k kernel, 16k pages, ai, alyssa rosenzweig, apple, apple silicon, arch arm, arm, arm64, asahi linux, battery life, btrfs, chatgpt, conformant gpu driver, data center, data loss, davide cavalca, disk encryption, dual booting, fedora, fedora asahi remix, filesystem, gallium3d, gpu, gpu acceleration, hector martin, hpc, immutability, impermanence, jbod, jitsi meet, jupiter broadcasting, kde, kde connect, linux podcast, linux unplugged, llama 2, llama-gpt, llama.cpp, llm, luks, lvm, m1, m2, mac mini, macos, mattermost, meta, ml, neal gompa, nixos, oneplus 6, open source ai, openai, opengl es 3.1, openstreetmap, opensuse tumbleweed, organic maps, plasma, rdp, self-hosting, server temperature, sip, snapdragon 845, streetcomplete, telemetry, thunderbolt, uefi, umbrel, vnc, voip, xfs, xfs_repair, zfs, 🦙