


EmbedChain is a framework to easily create LLM-powered bots over any dataset.


    import os

    from embedchain import Llama2App


    zuck_bot = Llama2App()

    # Embed your data
    zuck_bot.add("youtube_video", "")
    zuck_bot.add("web_page", "")

    # Nice, your bot is ready now. Start asking questions to your bot.
    zuck_bot.query("Who is Mark Zuckerberg?")
    # Answer: Mark Zuckerberg is an American internet entrepreneur and business magnate. He is the co-founder and CEO of Facebook.

txtai is an all-in-one embeddings database for semantic search, LLM orchestration, and language model workflows.
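The core idea behind an embeddings database — index text as vectors, then rank by similarity — can be sketched in a few lines of plain Python. This is a toy illustration, not txtai's API; real systems use neural encoders rather than the word-count vectors used here:

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': a bag-of-words frequency vector (real systems use neural encoders)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class TinyIndex:
    """Minimal in-memory semantic index: store vectors, rank by cosine similarity."""
    def __init__(self):
        self.docs = []

    def index(self, texts):
        self.docs = [(t, embed(t)) for t in texts]

    def search(self, query, k=1):
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [t for t, _ in ranked[:k]]

idx = TinyIndex()
idx.index(["the cat sat on the mat", "stock markets fell sharply today"])
print(idx.search("cat on a mat"))  # the cat sentence ranks first
```

Swapping the `embed` function for a neural sentence encoder turns this sketch into the basic shape of a real semantic-search index.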


  • Flowise
  • ChainForge A data-flow prompt engineering environment for evaluating and analyzing LLM responses
  • llm-chain ChatGPT and Alpaca support; agentic with bash commands.
  • Agent Flow
  • Auto Chain
  • ChatALL To interact with multiple chatbots at the same time.
  • LocalAI A drop-in replacement REST API that’s compatible with OpenAI API specifications for local inferencing.

Open Agent (in development) takes a microservices approach to AGI, offering modular components for AI apps or AGI agents.

DSPy is a framework for solving advanced tasks with language models and retrieval models.

Useful for exploring automatic prompt optimization.

Language-like interfaces

LMQL is a query language that enables simplified representations of chats and agents with minimal code.
    "Greet LMQL:[GREETINGS]\n" where stops_at(GREETINGS, ".") and not "\n" in GREETINGS

    if "Hi there" in GREETINGS:
        "Can you reformulate your greeting in the speech of \
         victorian-era English: [VIC_GREETINGS]\n" where stops_at(VIC_GREETINGS, ".")

    "Analyse what part of this response makes it typically victorian:\n"

    for i in range(4):
        "-[THOUGHT]\n" where stops_at(THOUGHT, ".")

    "To summarize:[SUMMARY]"

Control libraries

  • Guidance
  • RELM
  • Outlines
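All three libraries restrict what the model may emit at each decoding step. A minimal sketch of that underlying idea — greedy decoding over a toy score table, masking out tokens the constraint rejects — follows; every name here is hypothetical and does not reflect any of these libraries' actual APIs:

```python
def constrained_greedy(logits_fn, allowed_fn, max_steps=10):
    """Greedy decoding that masks out tokens the constraint rejects."""
    out = []
    for _ in range(max_steps):
        scores = logits_fn(out)  # maps token -> score given the current prefix
        legal = {t: s for t, s in scores.items() if allowed_fn(out, t)}
        if not legal:
            break
        out.append(max(legal, key=legal.get))
        if out[-1] == "<eos>":
            break
    return out

# Toy model that prefers "maybe", but the constraint only permits yes/no answers.
toy_scores = {"maybe": 2.0, "yes": 1.5, "no": 1.0, "<eos>": 0.5}
answer = constrained_greedy(
    lambda prefix: toy_scores,
    lambda prefix, tok: tok in {"yes", "no"} if not prefix else tok == "<eos>",
)
print(answer)  # ['yes', '<eos>'] — the higher-scored "maybe" was masked out
```

Real control libraries apply the same masking idea against regular expressions, grammars, or schemas instead of a hand-written predicate.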

Retrieval Augmentation focus

RAGAS is a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines.
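As a flavor of what RAG evaluation measures, here is a toy grounding score: the fraction of answer tokens that appear in the retrieved context. RAGAS's real metrics are LLM-assisted and more nuanced; this only illustrates the kind of quantity such pipelines compute:

```python
def grounding_ratio(answer, contexts):
    """Toy faithfulness proxy: share of answer tokens found in the retrieved context."""
    ctx_tokens = set(" ".join(contexts).lower().split())
    ans_tokens = answer.lower().split()
    if not ans_tokens:
        return 0.0
    return sum(t in ctx_tokens for t in ans_tokens) / len(ans_tokens)

score = grounding_ratio(
    "paris is the capital of france",
    ["paris is the capital and largest city of france"],
)
print(round(score, 2))  # 1.0 — every answer token appears in the context
```

A low score flags answers that drift away from the retrieved evidence — the failure mode RAG evaluation frameworks exist to catch.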

Monitoring GenAI

For reasons related to quality, ethics, and regulation, it is both useful, and at times required, to record the inputs and outputs of an LLM. Particularly in systems that may be used in higher-risk settings, monitoring is an essential component of GenAI. Also known as LLM observability, monitoring can involve people-in-the-loop as well as automated systems that observe the system and adapt it when inputs or outputs are undesired or dangerous.
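A minimal sketch of this kind of monitoring wraps the LLM call so every prompt/response pair is logged and checked against a policy. The policy check and the stand-in model below are placeholders, not any observability product's API:

```python
import time

def monitored(llm_call, log):
    """Wrap any LLM call so every prompt/response pair is recorded for later review."""
    def wrapper(prompt):
        response = llm_call(prompt)
        log.append({
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
            # Toy policy check; real systems use classifiers or human review.
            "flagged": "password" in prompt.lower(),
        })
        return response
    return wrapper

records = []
fake_llm = monitored(lambda p: f"echo: {p}", records)  # stand-in for a real model call
fake_llm("What is RAG?")
print(records[0]["flagged"])  # False — nothing in this prompt trips the toy policy
```

The same wrapper shape extends naturally to latency tracking, cost accounting, and routing flagged exchanges to a human reviewer.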


Llama ecosystem

LlamaIndex provides an orchestration framework for LLM applications, with multiple data connectors.

Llama Lab provides a set of flexible, experimental tools built around LlamaIndex.

Llama is a library and set of models with an expanding community, due to the generally open nature of the high-quality Llama 2 models.

LLaMA2-Accessory is an open-source toolkit for pretraining, finetuning, and deployment of Large Language Models (LLMs) and multimodal LLMs. This repo is mainly inherited from LLaMA-Adapter, with more advanced features.

Code and models surrounding Llama


Haystack is an end-to-end LLM orchestration framework that enables a number of versatile interactions.

  • Open source, by deepset
  • Designed for scalable search and retrieval
  • Evaluation pipelines for system eval
  • Deployable as a REST API
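The pipeline pattern these orchestration frameworks share — components chained so each one transforms shared state — can be sketched as follows. This is a toy abstraction, not Haystack's actual API:

```python
class Pipeline:
    """Toy sequential pipeline: each component transforms a shared dict of data."""
    def __init__(self):
        self.components = []

    def add(self, name, fn):
        self.components.append((name, fn))
        return self  # allow chaining

    def run(self, data):
        for name, fn in self.components:
            data = fn(data)
        return data

docs = ["haystack does retrieval", "pipelines chain components"]

pipe = (Pipeline()
        .add("retrieve", lambda d: {**d, "hits": [x for x in docs if d["query"] in x]})
        .add("answer",   lambda d: {**d, "answer": d["hits"][0] if d["hits"] else "not found"}))

result = pipe.run({"query": "retrieval"})
print(result["answer"])  # "haystack does retrieval"
```

Real frameworks add typed inputs/outputs, branching, and REST deployment on top of this basic component-chaining shape.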


Griptape is an enterprise alternative to LangChain.

  • Open source / managed
  • Commercial support
  • Optimized for scalability and the cloud
  • Encryption, access control, security