Welcome to LlamaParse, the first public-facing release in LlamaCloud! To get started, head to cloud.llamaindex.ai and log in with the method of your choice.
You should now see our welcome screen. There are two things you can do here: use LlamaParse, or create an index and optimize it in the playground (invitation-only beta, get in touch if you’d like to try it out).
The point of the LlamaParse UI is to let you try out our new, industry-leading PDF parsing capabilities before you integrate them into your code. Let’s go ahead and click “Parse”.
You now have three options: you can use LlamaParse in the UI, in Python, or as a standalone REST API that you can call from any language. We’ll take a look at all three! Let’s start with the UI method: just drag and drop any PDF into the grey box.
We started with a simple PDF printout of the Wikipedia page for “Canada”. We immediately get a preview of the PDF, and the parsing process starts. Depending on the size of your PDF, this can take a while.
If you scroll down past the PDF preview, you’ll see the results. The parser turns the PDF into Markdown, which LLMs can understand much more easily than raw PDF. Our parser understands embedded tables, will automatically parse text out of images, and handles a huge range of fonts.
Now let’s try the second option: using LlamaParse from Python via the llama-parse package. If you click into the “Use with LlamaIndex” section you’ll see basic instructions, but we’ll walk you through it.
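Once the package is installed (`pip install llama-parse`), parsing is only a few lines of code. Here’s a minimal sketch; it assumes your API key is already in the environment (we’ll set that up below), and `canada.pdf` is a stand-in filename for whatever PDF you want to parse:

```python
import os

def parse_to_markdown(path: str):
    """Parse a local PDF into Markdown with LlamaParse (needs network + API key)."""
    from llama_parse import LlamaParse  # pip install llama-parse

    parser = LlamaParse(
        api_key=os.environ["LLAMA_CLOUD_API_KEY"],  # from your .env or shell
        result_type="markdown",  # ask for Markdown output, as in the UI
    )
    # Returns a list of Document objects, one per parsed result
    return parser.load_data(path)

# Usage (this calls the LlamaCloud API, so it needs a valid key):
# docs = parse_to_markdown("canada.pdf")
# print(docs[0].text[:500])
```

The actual call runs in the cloud, so the first parse of a large PDF can take a little while, just like in the UI.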
First, we’ll need an API key. Click “API Key” down in the bottom left, and click “Generate New Key”:
Pick a name for your key and click “Create new key”, then copy the key that’s generated. You won’t have a chance to copy your key again!
Put your key in a .env file in the form LLAMA_CLOUD_API_KEY=llx-xxxxxx. If you lose your key, you can always revoke it and create a new one.
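The usual way to read a .env file in Python is the python-dotenv package (`from dotenv import load_dotenv; load_dotenv()`), but if you’d rather not add a dependency, a minimal hand-rolled loader is only a few lines. This is a sketch, not the official mechanism:

```python
import os

def load_env(path: str = ".env") -> dict:
    """Minimal .env loader: put KEY=value lines into os.environ.

    Skips blank lines and # comments; existing environment
    variables are not overwritten.
    """
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
            os.environ.setdefault(key.strip(), value.strip())
    return loaded
```

Call `load_env()` before constructing the parser, and `LLAMA_CLOUD_API_KEY` will be available via `os.environ`.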