Candle Cookbook

🚀 The Mission:

Democratize access to state-of-the-art AI models.

🌏 The Principles:

🫱🏾‍🫲🏼 Trust ~ ethically sourced data and end-to-end transparency.

🔒 Privacy ~ secure, on-device inference without data sharing.

🌱 Sustainability ~ optimize efficiency to minimize our carbon footprint.

🕯️ Start Here

Welcome! Get familiar with the Candle Cookbook by working through some of our favourite introductory tutorials.

We also recommend getting familiar with the official Candle framework and its User Guide.

🌱 Contributing

We welcome contributions from anyone who aligns with Our Mission and Our Principles.

To get started as a contributor:

🍳 The Recipes:

Minimum requirements for GPU-targeted binaries

For CUDA-enabled builds using --features cuda:

For cuDNN-optimized builds using --features cuda,cudnn:
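As a sketch, the corresponding cargo invocations for these feature flags might look like the following (assuming you are building a Candle-based binary from your project root and have the CUDA toolkit, and for the second command cuDNN, already installed):

```shell
# CUDA-enabled release build
cargo build --release --features cuda

# cuDNN-optimized release build (cuDNN must be installed alongside CUDA)
cargo build --release --features cuda,cudnn
```

Note that cargo accepts multiple features either comma-separated (cuda,cudnn) or as a space-separated quoted list ("cuda cudnn").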

Verify CUDA/cuDNN:

# Verify CUDA
nvidia-smi --query-gpu=compute_cap --format=csv
nvcc --version

# Verify cuDNN
whereis cudnn.h
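Beyond the CLI checks above, you can also confirm that Candle itself can see your GPU. A minimal sketch, assuming candle-core is a dependency of your project and was built with the cuda feature:

```rust
// Minimal GPU sanity check, assuming candle-core is in Cargo.toml
// and the binary was built with --features cuda.
use candle_core::Device;

fn main() -> candle_core::Result<()> {
    // Fails with an error if no usable CUDA device is found.
    let device = Device::new_cuda(0)?;
    println!("CUDA device 0 is ready: {:?}", device);
    Ok(())
}
```

If this errors out while nvidia-smi succeeds, the build was likely compiled without the cuda feature enabled.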

⚠️ IMPORTANT:

AWS/Azure builds may incur charges. It is your responsibility to understand the associated resource costs. Please review the usage rates accordingly; we are not liable for any charges incurred from the use of these platforms/services.

🛣️ Roadmap

Our goal is to document each stage of a fully transparent LLM development cycle.

  • Publish MVP Candle Cookbook
  • Ethically source and construct an openly available LLM dataset
  • Build a Candle-based LLM from scratch
  • Customize the LLM with fine-tuning
  • CI/CD deployment of LLM

🧑‍🍳 Our Team:

Get to know our Community Leaders