| tokens (int64, 182–43.1k) | doc_id (string, length 36) | name (string, length 6–64) | url (string, length 42–109) | retrieve_doc (bool, 2 classes) | source (string, 1 class) | content (string, length 801–96.2k) |
|---|---|---|---|---|---|---|
1,869 | f2a017cd-c6a2-4611-b722-10951ad23a91 | Welcome to LlamaIndex 🦙 ! | https://docs.llamaindex.ai/en/stable/index | true | llama_index |
# Welcome to LlamaIndex 🦙 !
LlamaIndex is a framework for building context-augmented generative AI applications with [LLMs](https://en.wikipedia.org/wiki/Large_language_model).
<div class="grid cards" markdown>
- <span style="font-size: 200... |
979 | 4ce1a9a2-e91a-47ae-9cbe-0566b5db3acb | Building an LLM application | https://docs.llamaindex.ai/en/stable/understanding/index | true | llama_index | # Building an LLM application
Welcome to the beginning of Understanding LlamaIndex. This is a series of short, bite-sized tutorials on every stage of building an LLM application to get you acquainted with how to use LlamaIndex before diving into more advanced and subtle strategies. If you're an experienced programmer ... |
182 | 5b64e132-a551-4e6f-9c95-2606810cae8c | Privacy and Security | https://docs.llamaindex.ai/en/stable/understanding/using_llms/privacy | true | llama_index | # Privacy and Security
By default, LlamaIndex sends your data to OpenAI for generating embeddings and natural language responses. However, this can be configured according to your preferences. LlamaIndex provides the flexibility to use your own embedding model or run a large language model... |
869 | 7be87819-70df-4a9c-b558-ea795bb332d3 | Using LLMs | https://docs.llamaindex.ai/en/stable/understanding/using_llms/using_llms | true | llama_index | # Using LLMs
!!! tip
For a list of our supported LLMs and a comparison of their functionality, check out our [LLM module guide](../../module_guides/models/llms.md).
One of the first steps when building an LLM-based application is deciding which LLM to use; you can also use more than one if you wish.
LLMs are used at mult... |
363 | 888d853a-1b0c-4456-b289-be9ed2c89c2a | LlamaHub | https://docs.llamaindex.ai/en/stable/understanding/loading/llamahub | true | llama_index | # LlamaHub
Our data connectors are offered through [LlamaHub](https://llamahub.ai/) 🦙.
LlamaHub contains a registry of open-source data connectors that you can easily plug into any LlamaIndex application (+ Agent Tools, and Llama Packs).

## Usage Pattern
Get started ... |
1,418 | 88e2611e-eb6e-43c2-97bf-9252717a0a56 | Loading Data (Ingestion) | https://docs.llamaindex.ai/en/stable/understanding/loading/loading | true | llama_index | # Loading Data (Ingestion)
Before your chosen LLM can act on your data, you first need to process the data and load it. This has parallels to data cleaning/feature engineering pipelines in the ML world, or ETL pipelines in the traditional data setting.
This ingestion pipeline typically consists of three main stages:
... |
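The three ingestion stages described in that record (load, transform, index) can be sketched in plain Python. This is a toy illustration, not the LlamaIndex API; the function names `load_documents`, `chunk`, and `build_index` are hypothetical.

```python
# Toy sketch of the three ingestion stages: load, transform (chunking),
# and index. Plain Python only — not the LlamaIndex ingestion pipeline.

def load_documents():
    # Stage 1: "load" — here we just return in-memory strings.
    return ["LlamaIndex loads data. It then chunks it.", "Chunks become nodes."]

def chunk(text, size=24):
    # Stage 2: "transform" — naive fixed-size character chunking.
    return [text[i : i + size] for i in range(0, len(text), size)]

def build_index(docs):
    # Stage 3: "index" — map each chunk (node) back to its source document.
    index = []
    for doc_id, doc in enumerate(docs):
        for node in chunk(doc):
            index.append({"doc_id": doc_id, "text": node})
    return index

index = build_index(load_documents())
```

A real pipeline would replace the character chunker with a sentence- or token-aware splitter and attach metadata to each node.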
581 | 81066675-5d92-4073-853a-02f7605ce032 | Evaluating | https://docs.llamaindex.ai/en/stable/understanding/evaluating/evaluating | true | llama_index | # Evaluating
Evaluation and benchmarking are crucial concepts in LLM development. To improve the performance of an LLM app (RAG, agents), you must have a way to measure it.
LlamaIndex offers key modules to measure the quality of generated results. We also offer key modules to measure retrieval quality. You can learn ... |
492 | 94a22f57-ea69-4559-926d-77f80c448b7e | Usage Pattern | https://docs.llamaindex.ai/en/stable/understanding/evaluating/cost_analysis/usage_pattern | true | llama_index | # Usage Pattern
## Estimating LLM and Embedding Token Counts
To measure LLM and Embedding token counts, you'll need to:
1. Set up `MockLLM` and `MockEmbedding` objects
```python
from llama_index.core.llms import MockLLM
from llama_index.core import MockEmbedding
llm = MockLLM(max_tokens=256)
embed_model = M... |
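The mock-model token-counting pattern in that snippet can be imitated without the library. The `TokenCounter` class below is a hypothetical stand-in, and splitting on whitespace is only a rough proxy for a real tokenizer.

```python
# Minimal stand-in for the MockLLM token-counting pattern, in plain Python.
# Counts whitespace-separated "tokens" — a rough proxy, not a real tokenizer.

class TokenCounter:
    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def track(self, prompt, completion):
        # Record token counts for one simulated LLM call.
        self.prompt_tokens += len(prompt.split())
        self.completion_tokens += len(completion.split())

    @property
    def total(self):
        return self.prompt_tokens + self.completion_tokens

counter = TokenCounter()
counter.track("What did the author do?", "The author wrote essays.")
```

The point of the mock objects in the real API is the same: exercise the full pipeline and tally tokens without paying for actual model calls.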
885 | 20ea3cb9-4145-4805-887e-7c48f1333c04 | Cost Analysis | https://docs.llamaindex.ai/en/stable/understanding/evaluating/cost_analysis/index | true | llama_index | # Cost Analysis
## Concept
Each call to an LLM will cost some amount of money - for instance, OpenAI's gpt-3.5-turbo costs $0.002 / 1k tokens. The cost of building an index and querying depends on
- the type of LLM used
- the type of data structure used
- parameters used during building
- parameters used during quer... |
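The pricing quoted in that record ($0.002 per 1k tokens) makes the cost arithmetic easy to sketch. The function below is a simplification: real pricing usually differs between prompt and completion tokens.

```python
# Back-of-the-envelope cost arithmetic for a flat per-token price.
# Real pricing often splits prompt vs. completion tokens; this is simplified.

PRICE_PER_1K_TOKENS = 0.002  # USD — the gpt-3.5-turbo figure quoted above

def estimate_cost(num_tokens, price_per_1k=PRICE_PER_1K_TOKENS):
    # Cost scales linearly with token count.
    return num_tokens / 1000 * price_per_1k

# Example: pushing 500k tokens through the model at this rate.
indexing_cost = estimate_cost(500_000)
```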
710 | 90154ae9-1d90-4442-a9b3-5bedaba0074c | Agents with local models | https://docs.llamaindex.ai/en/stable/understanding/agent/local_models | true | llama_index | # Agents with local models
If you're happy using OpenAI or another remote model, you can skip this section, but many people are interested in using models they run themselves. The easiest way to do this is via the great work of our friends at [Ollama](https://ollama.com/), who provide a simple-to-use client that will ... |
971 | 9830872c-c9b8-4b01-9518-9a1fa6c14821 | Adding RAG to an agent | https://docs.llamaindex.ai/en/stable/understanding/agent/rag_agent | true | llama_index | # Adding RAG to an agent
To demonstrate using RAG engines as a tool in an agent, we're going to create a very simple RAG query engine. Our source data is going to be the [Wikipedia page about the 2023 Canadian federal budget](https://en.wikipedia.org/wiki/2023_Canadian_federal_budget) that we've [printed as a PDF](htt... |
559 | 8df3083f-e2ae-48de-b70c-82b0213e5af4 | Enhancing with LlamaParse | https://docs.llamaindex.ai/en/stable/understanding/agent/llamaparse | true | llama_index | # Enhancing with LlamaParse
In the previous example we asked a very basic question of our document, about the total amount of the budget. Let's instead ask a more complicated question about a specific fact in the document:
```python
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from... |
793 | c8371e03-8cc7-4a36-b589-27a79fad6c81 | Memory | https://docs.llamaindex.ai/en/stable/understanding/agent/memory | true | llama_index | # Memory
We've now made several additions and subtractions to our code. To make it clear what we're using, you can see [the current code for our agent](https://github.com/run-llama/python-agents-tutorial/blob/main/5_memory.py) in the repo. It's using OpenAI for the LLM and LlamaParse to enhance parsing.
We've also ad... |
983 | 105b26c9-8f71-4dbb-915e-3c10c5105353 | Adding other tools | https://docs.llamaindex.ai/en/stable/understanding/agent/tools | true | llama_index | # Adding other tools
Now that you've built a capable agent, we hope you're excited about all it can do. The core of expanding agent capabilities is the tools available, and we have good news: [LlamaHub](https://llamahub.ai) from LlamaIndex has hundreds of integrations, including [dozens of existing agent tools](https:... |
1,197 | e539dfa2-9a44-42a8-aa53-598e47a4b591 | Building a basic agent | https://docs.llamaindex.ai/en/stable/understanding/agent/basic_agent | true | llama_index | # Building a basic agent
In LlamaIndex, an agent is a semi-autonomous piece of software powered by an LLM that is given a task and executes a series of steps towards solving that task. It is given a set of tools, which can be anything from arbitrary functions up to full LlamaIndex query engines, and it selects the bes... |
1,069 | 37983b44-ac28-44e2-b2a8-455df06ee13b | Storing | https://docs.llamaindex.ai/en/stable/understanding/storing/storing | true | llama_index | # Storing
Once you have data [loaded](../loading/loading.md) and [indexed](../indexing/indexing.md), you will probably want to store it to avoid the time and cost of re-indexing it. By default, your indexed data is stored only in memory.
## Persisting to disk
The simplest way to store your indexed data is to use the... |
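The persist-and-reload round trip that record describes can be shown with plain Python and JSON instead of LlamaIndex's storage context. `save_index` and `load_index` are hypothetical names; the point is only that an in-memory index can be written once and reloaded instead of re-indexed.

```python
# Persist/reload round trip sketched with JSON — not LlamaIndex's
# StorageContext, just the underlying idea: save once, reload later.
import json
import os
import tempfile

def save_index(index, path):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(index, f)

def load_index(path):
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

index = [{"doc_id": 0, "text": "hello world"}]
path = os.path.join(tempfile.mkdtemp(), "index.json")
save_index(index, path)
reloaded = load_index(path)
```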
397 | 5f60c10c-560d-47ff-87c3-228f49a478c0 | Tracing and Debugging | https://docs.llamaindex.ai/en/stable/understanding/tracing_and_debugging/tracing_and_debugging | true | llama_index | # Tracing and Debugging
Debugging and tracing the operation of your application is key to understanding and optimizing it. LlamaIndex provides a variety of ways to do this.
## Basic logging
The simplest possible way to look into what your application is doing is to turn on debug logging. That can be done anywhere in... |
899 | 5b253e54-efac-4382-b5a5-7462cefcbce2 | Indexing | https://docs.llamaindex.ai/en/stable/understanding/indexing/indexing | true | llama_index | # Indexing
With your data loaded, you now have a list of Document objects (or a list of Nodes). It's time to build an `Index` over these objects so you can start querying them.
## What is an Index?
In LlamaIndex terms, an `Index` is a data structure composed of `Document` objects, designed to enable querying by an L... |
1,494 | 92a2e347-69c9-4c40-85bf-65093eb36b46 | Querying | https://docs.llamaindex.ai/en/stable/understanding/querying/querying | true | llama_index | # Querying
Now that you've loaded your data, built an index, and stored that index for later, you're ready for the most significant part of an LLM application: querying.
At its simplest, querying is just a prompt call to an LLM: it can be a question and get an answer, or a request for summarization, or a much more c... |
399 | 906509df-1a70-4ab8-9df2-68aee062407c | Putting It All Together | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/index | true | llama_index | # Putting It All Together
Congratulations! You've loaded your data, indexed it, stored your index, and queried your index. Now you've got to ship something to production. We can show you how to do that!
- In [Q&A Patterns](q_and_a.md) we'll go into some of the more advanced and subtle ways you can build a query engin... |
1,084 | bf31b6c1-15db-4298-aacf-793390f87cb0 | Agents | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/agents | true | llama_index | # Agents
Putting together an agent in LlamaIndex can be done by defining a set of tools and providing them to our ReActAgent implementation. We're using it here with OpenAI, but it can be used with any sufficiently capable LLM:
```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai imp... |
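The tools-plus-agent pattern in that record can be imitated without the library. In the real `ReActAgent` an LLM chooses the tool; here a trivial keyword router stands in for the model, so `route` and `TOOLS` are hypothetical scaffolding, not the LlamaIndex API.

```python
# Toy version of the tools + agent pattern: a registry of plain functions
# and a keyword "router" standing in for the LLM's tool-selection step.

def multiply(a, b):
    """Multiply two integers and return the result."""
    return a * b

def add(a, b):
    """Add two integers and return the result."""
    return a + b

TOOLS = {"multiply": multiply, "add": add}

def route(query):
    # Stand-in for the LLM deciding which tool fits the query.
    name = "multiply" if "times" in query else "add"
    return TOOLS[name]

tool = route("What is 3 times 4?")
result = tool(3, 4)
```

The docstrings matter in the real API too: `FunctionTool` uses them to describe each tool to the LLM.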
5,652 | 8dada3ca-6484-4531-8f3d-cf97f6b9fcd9 | A Guide to Extracting Terms and Definitions | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/q_and_a/terms_definitions_tutorial | true | llama_index | # A Guide to Extracting Terms and Definitions
Llama Index has many use cases (semantic search, summarization, etc.) that are well documented. However, this doesn't mean we can't apply Llama Index to very specific use cases!
In this tutorial, we will go through the design process of using Llama Index to extract terms ... |
1,871 | 86e843c6-0a02-4475-84f3-0daaee761aeb | Q&A patterns | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/q_and_a/index | true | llama_index | # Q&A patterns
## Semantic Search
The most basic example usage of LlamaIndex is through semantic search. We provide a simple in-memory vector store for you to get started, but you can also choose to use any one of our [vector store integrations](../../community/integrations/vector_stores.md):
```python
from llama_in... |
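The semantic-search idea in that record can be shown with a miniature in-memory "vector store": a fake bag-of-words embedding scored by cosine similarity. Plain Python only; `embed` and `search` are hypothetical helpers, not LlamaIndex's `VectorStoreIndex`.

```python
# Miniature in-memory vector store: fake word-count "embeddings" ranked by
# cosine similarity. Illustrative only — real stores use dense embeddings.
import math
from collections import Counter

def embed(text):
    # Fake embedding: a sparse word-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

docs = ["the budget was tabled in 2023", "llamas are woolly animals"]
vectors = [embed(d) for d in docs]

def search(query):
    # Return the stored document most similar to the query.
    q = embed(query)
    scores = [cosine(q, v) for v in vectors]
    return docs[scores.index(max(scores))]
```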
3,639 | 0a9fdd80-bd50-41e1-86b6-4dddbefd25f0 | Airbyte SQL Index Guide | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/structured_data/Airbyte_demo | true | llama_index | # Airbyte SQL Index Guide
We will show how to generate SQL queries on a Snowflake db generated by Airbyte.
```python
# Uncomment to enable debugging.
# import logging
# import sys
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout)... |
1,389 | 2ed4f255-948b-40be-8d07-7a07057fa10e | Structured Data | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/structured_data/index | true | llama_index | # Structured Data
# A Guide to LlamaIndex + Structured Data
A lot of modern data systems depend on structured data, such as a Postgres DB or a Snowflake data warehouse.
LlamaIndex provides a lot of advanced features, powered by LLMs, to both create structured data from
unstructured data, as well as analyze this stru... |
4,506 | 3b04b376-b99a-40a3-96f6-571a5dda5fcb | How to Build a Chatbot | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/chatbots/building_a_chatbot | true | llama_index | # How to Build a Chatbot
LlamaIndex serves as a bridge between your data and Large Language Models (LLMs), providing a toolkit that enables you to establish a query interface around your data for a variety of tasks, such as question-answering and summarization.
In this tutorial, we'll walk you through building a cont... |
3,667 | 874edc9f-5575-4c23-a772-908223caa446 | A Guide to Building a Full-Stack Web App with LLamaIndex | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/apps/fullstack_app_guide | true | llama_index | # A Guide to Building a Full-Stack Web App with LLamaIndex
LlamaIndex is a Python library, which means that integrating it with a full-stack web application will be a little different than what you might be used to.
This guide seeks to walk through the steps needed to create a basic API service written in python, and... |
182 | d4157c1a-a595-4350-9ba4-63e0e92e2984 | Full-Stack Web Application | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/apps/index | true | llama_index | # Full-Stack Web Application
LlamaIndex can be integrated into a downstream full-stack web application. It can be used in a backend server (such as Flask), packaged into a Docker container, and/or directly used in a framework such as Streamlit.
We provide tutorials and resources to help you get started in this area:
... |
End of preview.
No dataset card yet
- Downloads last month: 34