A quick-start guide to get LlamaIndex running in under 5 minutes.
```bash
pip install llama-index
export OPENAI_API_KEY="your-api-key-here"
mkdir data
# Put your PDF, TXT, or MD files in the data/ folder
```
Create `app.py`:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load documents from the data/ folder
documents = SimpleDirectoryReader("data").load_data()

# Create a vector index
index = VectorStoreIndex.from_documents(documents)

# Create a query engine
query_engine = index.as_query_engine()

# Ask a question
response = query_engine.query("What is this document about?")
print(response)
```
Run it:

```bash
python app.py
```
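Under the hood, `VectorStoreIndex` embeds each document chunk and answers queries by vector similarity. The toy sketch below (plain Python, no LlamaIndex; the bag-of-words "embedding" is a made-up stand-in for a real embedding model) illustrates the retrieval step only:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector, standing in
    for the dense vectors a real embedding model produces."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Indexing": embed every chunk once, up front
chunks = [
    "LlamaIndex connects your data to LLMs",
    "Paris is the capital of France",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(question):
    """Embed the question and return the most similar chunk."""
    q = embed(question)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]

print(retrieve("What is the capital of France?"))
# → Paris is the capital of France
```

A real index stores dense model embeddings and searches them efficiently, but the embed-then-compare shape is the same.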
The default `llama-index` package bundles the core library with a starter set of integrations:

```bash
pip install llama-index
```
Install only what you need:

```bash
# Core package only
pip install llama-index-core

# Add specific integrations
pip install llama-index-llms-openai
pip install llama-index-embeddings-openai
pip install llama-index-vector-stores-chroma
```
With Poetry:

```bash
poetry add llama-index
```
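Not sure which of these packages are already present? A quick stdlib-only check (the module names are the real import paths; the `installed` helper itself is just for this guide):

```python
from importlib.util import find_spec

def installed(module_name):
    """Return True if the module can be imported in this environment."""
    try:
        return find_spec(module_name) is not None
    except ModuleNotFoundError:
        # A missing parent package raises instead of returning None
        return False

for name in [
    "llama_index.core",
    "llama_index.llms.openai",
    "llama_index.embeddings.openai",
    "llama_index.vector_stores.chroma",
]:
    status = "ok" if installed(name) else "missing"
    print(f"{name}: {status}")
```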
Save as `rag_app.py`:
```python
import os

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Set your API key
os.environ["OPENAI_API_KEY"] = "sk-..."

# Load documents
documents = SimpleDirectoryReader("./data").load_data()
print(f"Loaded {len(documents)} documents")

# Create index
index = VectorStoreIndex.from_documents(documents)

# Save to disk (optional)
index.storage_context.persist(persist_dir="./storage")

# Create query engine
query_engine = index.as_query_engine()

# Ask questions interactively
while True:
    question = input("\nYour question: ")
    if question.lower() in ["quit", "exit"]:
        break
    response = query_engine.query(question)
    print(f"Answer: {response}")
```
Run it:

```bash
python rag_app.py
```
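The quit/exit check in the loop is an exact string match; a small helper (hypothetical, not part of the LlamaIndex API) makes it tolerant of stray whitespace and capitalization:

```python
def should_exit(user_input):
    """True when the user wants to leave the question loop.
    Tolerates surrounding whitespace and any capitalization."""
    return user_input.strip().lower() in {"quit", "exit"}

# In the loop, replace the membership test with:
#     if should_exit(question):
#         break
print(should_exit("  Quit \n"))   # → True
print(should_exit("What is RAG?"))  # → False
```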
Next time, reload the saved index instead of rebuilding it:

```python
from llama_index.core import StorageContext, load_index_from_storage

# Load the index from disk
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

# Query as before
query_engine = index.as_query_engine()
```
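Persist-then-reload is just a build-or-load cache pattern. The same idea in plain Python, with a JSON file standing in for the `./storage` directory and `expensive_build` as a hypothetical stand-in for indexing:

```python
import json
import os
import tempfile

def expensive_build():
    """Stand-in for the slow step (VectorStoreIndex.from_documents)."""
    return {"doc_count": 3, "status": "indexed"}

def build_or_load(path):
    """Load the cached result if it exists, otherwise build and persist it."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)       # fast path: reload from disk
    result = expensive_build()        # slow path: build once...
    with open(path, "w") as f:
        json.dump(result, f)          # ...then persist for next time
    return result

cache = os.path.join(tempfile.gettempdir(), "demo_index.json")
if os.path.exists(cache):
    os.remove(cache)                  # start clean for the demo

first = build_or_load(cache)   # builds and persists
second = build_or_load(cache)  # loads from disk
print(first == second)         # → True
```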
Setting up RAG systems can be complex, and we offer consulting services to help. Contact us at office@linux-server-admin.com or visit our contact page.