Fix AttributeError: 'NoneType' object has no attribute 'model_extra'
* `cost = chunk.usage.model_extra.get("estimated_cost", 0) if hasattr(chunk, "usage") and chunk.usage else 0`  # Estimated costs returned by DeepInfra API
- Encode article urls in filenames indexed in Khoj KB
Makes it easier for humans to compare, trace retrieval performance
by looking at logs than using a content hash (which was previously
explored)
- Format AI response to send in automation email
- Let Khoj infer chat query based on user automation query
- Decide if automation emails should be sent based on response
- Fix the `to_notify_or_not` decider AI
- Ask reason before decision to improve to_notify decider AI
- Show error message on web app when fail to create/update automation
Previously we sent the AI response directly. This change post-processes
the AI response with the awareness that it will be sent to the user as
an email, to improve the rendering and quality of the emails.
This tries to decouple the automation query from the chat query. So
the chat model doesn't have to know it is running in an automation
context or figure out how to notify the user or send the automation
response. It just has to respond to the AI generated `query_to_run`
corresponding to the `scheduling_request` automation by the user.
For example, a `scheduling_request` of `notify me when X happens`
results in the automation calling the chat api with a `query_to_run`
like `tell me about X` and deciding whether to notify based on information
gathered about X from the scheduled run. If these two are not
decoupled, the chat model may respond with how it can notify about X
instead of just asking about it.
Swap query_to_run with scheduling_request on the automation web page
- Rather than chunky generic cards, make the suggested actions more action oriented, around the problem a user might want to solve. Give them follow-up options. Design still in progress.
- De facto, it was assumed everywhere that if authenticatedData is null, the user is not logged in. This isn't true because the data can still be loading. Update the hook to send additional states.
- Bonus: Delete model picker code and a slew of unused imports.
- Add support for seeing all steps of the agent modification flow via tabs at the top of the modal
- Separate knowledge base & tool selection into two separate parts
Use the [shadcn sidebar](https://ui.shadcn.com/docs/components/sidebar#sidebarmenusub) across Khoj and standardize how the side panel experience works across the app. This helps us generalize the code better, while re-using the same components.
Deprecates current usage of the chat history side panel, replacing it with the new `appSidebar.tsx` component.
We'll eventually move out the `Manage Files` section and move it into a separate panel dedicated to chat-level controls.
New conversations have a slug, but older conversations may not. This
change allows those older conversations to still be shareable by using
a random uuid to construct their url instead
- Some of the instructions were duplicated (e.g. illegal, harmful)
- Requested return format was inconsistent
- Safety prompt felt overly prudish, which lowered its utility.
Make it laxer for now, add checks later if required
- Print hash in CI to ease verifying the CI-built python package matches
the khoj package published on PyPI
- Newer PyPI publish github action should speed up workflow by ~30s
Major
---
Previously we couldn't enable research mode or use other slash
commands in automated tasks.
This change separates determining whether a chat query was triggered
by an automated task from the (other) conversation commands to run the
query with.
This unlocks the ability to enable research mode in automations
apart from other variations like /image or /diagram etc.
Minor
---
- Clean the code to get data sources and output formats
- Have some examples shown on automations page run in research mode now
This allows online search to work out of the box again
for self-hosting users, as no auth/API key setup is required.
Docker users do not need to change anything in their setup flow.
Direct installers can set up Searxng locally or use public instances if
they do not want to use any of the other providers (like Jina, Serper)
Resolves #749. Resolves #990
The welcome, feedback and automation emails were still using the Khoj
domain, which wouldn't work for self-hosted users with their own RESEND key
Resolves #1004
- Previous name was incorrectly plural though it defined only a single model
- Rename chat model table field to name
- Update documentation
- Update referenced functions and variables to match the new name
- Move khoj message border to left like in web ui
- Remove user, sender emojis and name
- Ensure conversation title always at top of chat sessions view,
even if no chat sessions loaded yet, instead of causing layout shift
on chat sessions load
Improve handling of multiple output modes
- Use the generated descriptions / inferred queries to supply context to the model about what it's created and give a richer response
- Stop sending the generated image in user message. This seemed to be confusing the model more than helping.
- Collect generated assets in structured objects to provide model context. This seems to help it follow instructions and separate responsibilities better
- Also, rename the open ai converse method to converse_openai to follow patterns with other providers
Previously ids were used (by default) for model display strings.
This made it hard to select chat model options, server chat settings
etc. in the admin panel dropdowns.
This change uses more recognizable names for the DB objects to ease
selection in dropdowns and display in general on the admin panel.
* Rename OpenAIProcessorConversationConfig to the more apt AiModelAPI
The DB model name had drifted from what it is used for:
a general AI model API config that supports chat api providers like
anthropic and google apart from openai based chat models.
This change renames the DB model and updates the docs to remove this
confusion.
The AI Model API naming covers most use-cases, including chat, stt, image generation etc.
Simplify the desktop app
- Make the desktop app mainly a file-syncing client for users who have lots of documents that they need to share with Khoj. This is because the web app provides a fairly robust chat client which can be used by anyone on their computer.
- The chat client in the desktop app had significantly drifted from our current brand / theme, and didn't provide enough value-add to update. Later, we will make it easier to install the existing web app as a desktop PWA.
Currently, Khoj has terminal states with respect to what assets it outputs. We limit it to image, text, and excalidraw diagram. This limitation is unnecessary and provides undue constraints for creating more dynamic, diverse experiences. For instance, we may want the chat view to morph for document editing or generation, in which case having limited output modes would be a detriment.
This change allows us to emit generated assets and then continue on to more text generation in final response. It forces text response for all messages. It adds a new stream event, GENERATED_ASSETS, which holds content that the AI is emitting in response to the query.
Overview
- The default django admin panel UI looked pretty dated and didn't
have any Khoj specific branding
- Used the Unfold Django admin panel theme for a modern look
- Used the Khoj logo and name in Admin panel title, headings, favicons
Details:
All models shown on the Admin panel need to inherit from unfold's
ModelAdmin to get the styling applied (see the sketch after this list). So
- Make all models on Admin panel inherit from unfold's ModelAdmin
- Subclassed UserAdmin to inherit from unfold's ModelAdmin
- Deregistered the unused Auth Group model from the Admin panel
We can add it back when it's actually used. Avoids confusion for now
- Explicitly register DjangoJobExecution on admin panel and again make
it inherit from the unfold.admin.ModelAdmin
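A minimal sketch of the inheritance pattern described above, assuming the django-unfold package; the user model import path is illustrative:
```python
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from unfold.admin import ModelAdmin

from khoj.database.models import KhojUser  # illustrative import path

# Subclass unfold's ModelAdmin so the theme styling applies
@admin.register(KhojUser)
class KhojUserAdmin(BaseUserAdmin, ModelAdmin):
    pass
```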
- Use bubblewrap generated splash screen, notification icons from
1200x1200 high res khoj icon in assets.khoj.dev.
- Discard bubblewrap generated launcher icons.
- Fix orientation to respect the phone's orientation settings; "any" was not doing so.
- Add 512, 192 Khoj maskable icons to web app manifest for android rendering
- Add id, categories etc suggested by pwabuilder
- Use higher quality icon images for splash screen than what
bubblewrap creates by default
Encode the api key in the header, POST request body and GET query param for
search as utf-8 to avoid the multibyte-char-in-request issue when
making API calls from khoj.el to the khoj server.
Resolves #935
### Issue
- Paths with / are converted to \\ on Windows by the `Path` class.
- The `markdown_to_entries` module was trying to normalize file paths with `Path` for some reason.
This would store the file paths in the DB Entry differently than the file to entries map if Khoj ran on Windows.
That'd result in a KeyError when trying to look up the entry file path from `file_to_text_map` in the `text_to_entries:update_embeddings()` function.
### Fix
- Removing the unnecessary OS dependent Path normalization in `markdown_to_entries` should keep the file path storage consistent across the `file_to_text_map` var, `FileObjectAdaptor` and `Entry` DB tables on Windows for Markdown files as well.
This issue will affect users hosting the Khoj server on Windows and attempting to index markdown files.
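A minimal sketch of the mismatch, using an illustrative file path:
```python
from pathlib import Path

raw_path = "notes/journal.md"      # key stored in file_to_text_map
normalized = str(Path(raw_path))   # becomes "notes\\journal.md" on Windows

file_to_text_map = {raw_path: "file contents"}
# On Windows the normalized key no longer matches, raising KeyError:
# file_to_text_map[normalized]
```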
Resolves #984
- Add type hints to improve maintainability of stabilized indexing code
- It shouldn't be necessary to wrap string variables in an f-string
This change aims to improve code quality. It should not affect
functionality.
- Added it to the Django migrations so that it auto-triggers when someone updates their server and starts it up again for the first time. This will require that they update their clients as well in order to view/consume image content.
- Remove server-side references in the code that parse the text-to-image intent, as they will no longer be necessary given the chat logs will be migrated
This avoids unnecessarily throwing an internal server error when the
user tries to sign up using multiple mechanisms (e.g. first by email, then
by Google OAuth)
- Improve escaping to load complex json objects
- Fallback to a more forgiving [json5](https://json5.org/) loader if `json.loads` cannot parse complex json str
This should reduce failures to pick research tool and run code by agent
The JSON5 spec is more flexible, so try to load using a fast json5 parser if
the stricter json.loads from the standard library can't load the
raw complex json string into a python dictionary/list
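A minimal sketch of the fallback, assuming the pyjson5 parser; the helper name is illustrative:
```python
import json

import pyjson5

def load_complex_json(raw: str):
    """Parse with strict stdlib json, fall back to the laxer JSON5 spec."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return pyjson5.loads(raw)
```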
Gemini doesn't work well when trying to output json objects. Using it
to output raw json strings with complex, multi-line structures
requires more intense clean-up of raw json string for parsing
- Use pre-built wheels for torch and llama-cpp-python
- Install and link musl as it's used by llama-cpp-python pre-built
wheel instead of glibc
- Join Install git and Install Dependencies steps in pytest workflow
To remove unnecessary steps
- Building arm64 image on an ubuntu arm64 runner reduces `yarn build`
step time by 75% from 12mins to 3mins.
- This is because no QEMU emulation for arm64 on x86 is required now
- Parallelizing x64 and arm64 platform builds halves build time on top
- Revert to use standard ubuntu-latest runner as large x64 runner
doesn't give much more speed improvements
This results in an effective additional 50%-66% reduction in build time
on top of #987.
So a full dockerize workflow run now takes *10 mins* vs previous 35+mins.
This is a total of *72% improvement* in max dockerize run time.
Get additional speed improvements when docker layer cache hit.
## Objective
Improve build speed and size of khoj docker images
## Changes
### Improve docker image build speeds
- Decouple web app and server build steps
- Build the web app and server in parallel
- Cache docker layers for reuse across dockerize github workflow runs
- Split Docker build layers for improved cacheability (e.g separate `yarn install` and `yarn build` steps)
### Reduce size of khoj docker images
- Use an up-to-date `.dockerignore` to exclude unnecessary directories
- Do not install cuda python packages for cpu builds
### Improve web app builds
- Use consistent mechanism to get fonts for web app
- Make tailwind extensions production instead of dev dependencies
- Make next.js create production builds for the web app (via `NODE_ENV=production` env var)
The current fix should improve Khoj responses when charts are in the response
context. It truncates code context before sharing it with response chat actors.
Previously Khoj would respond that it was not able to create a chart
but then have a generated chart in its response in default mode.
The truncate code context step was added to the research chat actor for
decision making but it wasn't added to the conversation response
generation chat actors.
When khoj generated charts with code for its response, the images in
the context would exceed context window limits.
So the truncation logic dropped all past context, including chat
history and context gathered for the current response.
This would result in the chat response generator 'forgetting' everything for
the current response when code generated images, charts in response context.
It needs to be used across routers and processors. Keeping it in the
run_code tool makes it hard to use in other chat provider contexts
due to circular dependency issues created by the
send_message_to_model_wrapper func
Previous changes to depend on just the PROMPTRACE_DIR env var instead
of KHOJ_DEBUG or the verbosity flag were partial/incomplete.
This fix adds all the changes required to only depend on the
PROMPTRACE_DIR env var to enable/disable prompt tracing in Khoj.
Pass your domain cert files via the --sslcert, --sslkey cli args.
For example, to start khoj at https://example.com, you'd run:
`KHOJ_DOMAIN=example.com khoj --sslcert example.com.crt --sslkey example.com.key --host example.com`
This sets up ssl certs directly with khoj without requiring a
reverse proxy like nginx to serve khoj behind https endpoint for
simple setups. More complex setups should, of course, still use a
reverse proxy for efficient request processing
- Track, return cost and usage metrics in chat api response
Track input, output token usage and cost of interactions with
openai, anthropic and google chat models for each call to the khoj chat api
- Collect, display and store costs & accuracy of eval run currently in progress
This provides more insight into eval runs during execution
instead of having to wait until the eval run completes.
- Previously when the settings list became long, the dropdown height would
overflow the screen height. Now its max height is clamped and it y-scrolls
- Previously the dropdown content would take the width of its content. This
meant the menu could sometimes be less wide than the button, which
felt strange. Now the dropdown content is at least the width of the parent button
- Track input, output token usage and cost for interactions
via chat api with openai, anthropic and google chat models
- Get usage metadata from OpenAI using stream_options
- Handle openai proxies that do not support passing usage in response
- Add new usage, end response events returned by chat api.
- This can be optionally consumed by clients at a later point
- Update streaming clients to mark message as completed after new
end response event, not after end llm response event
- Ensure usage data from final response generation step is included
- Pass usage data after llm response complete. This allows gathering
token usage and cost for the final response generation step across
streaming and non-streaming modes
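A minimal sketch of reading usage from an OpenAI streaming response via stream_options, assuming the official openai client; the model name is illustrative:
```python
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
    stream_options={"include_usage": True},  # request a final usage chunk
)

usage = None
for chunk in stream:
    # The final chunk carries token usage; proxies that don't support
    # stream_options may never send it, so usage can remain None
    if chunk.usage:
        usage = chunk.usage
print(usage)
```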
- Previously online chat actors, director tests only worked with openai.
This change allows running them for any supported online provider
including Google, Anthropic and OpenAI.
- Enable online/offline chat actor, director in two ways:
1. Explicitly setting KHOJ_TEST_CHAT_PROVIDER environment variable to
google, anthropic, openai, offline
2. Implicitly by the first API key found from openai, google or anthropic.
- Default offline chat provider to use Llama 3.1 3B for faster, lower
compute test runs
- Set output mode to a single string. Specify the output schema in the prompt
- Both of these should encourage the model to select only 1 output mode
instead of encouraging it in the prompt too many times
- The output schema should also improve schema following in general
- Standardize variable, func name of io selector for readability
- Fix chat actors to test the io selector chat actor
- Make chat actor return sources, output separately for better
disambiguation, at least during tests, for now
- Evaluate khoj on 200 random questions from each of the Google FRAMES and OpenAI SimpleQA benchmarks across *general*, *default* and *research* modes
- Run eval with Gemini 1.5 Flash as test giver and Gemini 1.5 Pro as test evaluator models
- Trigger eval workflow on release or manually
- Make dataset, khoj mode and sample size configurable when triggered via manual workflow
- Enable Web search, webpage read tools during evaluation
- JSON extract from LLMs is pretty decent now, so get the input tools and output modes all in one go. It'll help the model think through the full cycle of what it wants to do to handle the request holistically.
- Make slight improvements to tool selection indicators
Previously, we'd replace the generated message with an error message
when message generation stopped via stop button on chat page of web app.
So the partially generated message (which could be useful) gets lost.
This change just stops generation, while keeping the generated
response so any useful information from the partially generated
message can be retrieved.
- Allows managing chat models in the OpenAI proxy service like Ollama.
- Removes need to manually add, remove chat models from Khoj Admin Panel
for these OpenAI compatible API services when enabled.
- Khoj still maintains the chat model configs within Khoj, so they can
be configured via the Khoj admin panel as usual.
Previously Jina search didn't need an API key. Now that it does, we
re-use the API key set in the Jina web scraper config,
otherwise fall back to using JINA_API_KEY from the environment variable, if
either is present.
Resolves #978
- Integrate with Ollama or other openai compatible APIs by simply
setting the `OPENAI_API_BASE` environment variable in docker-compose etc.
- Update docs on integrating with Ollama, openai proxies on first run
- Auto populate all chat models supported by openai compatible APIs
- Auto set vision enabled for all commercial models
- Minor
- Add huggingface cache to khoj_models volume. This is where chat
models and (now) sentence transformer models are stored by default
- Reduce verbosity of yarn install of web app. Otherwise we hit the docker
log size limit & it stops showing remaining logs after the web app install
- Suggest `ollama pull <model_name>` to start it in background
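A minimal sketch of auto-populating chat models from an OpenAI-compatible API, assuming Ollama's default endpoint and the official openai client:
```python
import os

from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("OPENAI_API_BASE", "http://localhost:11434/v1"),
    api_key=os.getenv("OPENAI_API_KEY", "placeholder"),  # Ollama ignores the key
)
# List every model served by the proxy and register each as a chat model
for model in client.models.list():
    print(model.id)
```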
- Update to initialize the latest claude 3.5 sonnet and haiku models
- Update to set vision enabled for google and anthropic models by
default. Previously we didn't support this, but we've supported it for a
month or two now
- Just load the raw csv from OpenAI bucket. Normalize it into FRAMES format
- Improve docstring for frames datasets as well
- Log the load dataset perf timer at info level
- Explicitly adding a slash command is a higher priority intent than
research mode being enabled in the background. Respect that for a
more intuitive UX flow.
- Explicit slash commands do not currently work in research mode.
You have to turn research mode off to use other slash commands. This
is strange and unnecessary given the intent priority is clear.
Previously errors would get eaten up and the model wouldn't see
anything. And the model wouldn't be allowed to re-run the same query-tool
combination in the next iteration.
This update should give it insight into why it didn't get a result. So
it can make an informed (hopefully better) decision on what to do next.
And re-run the previous query if appropriate.
Previously when a call to the online search API etc. failed, it'd error out
of the response to the query in research mode. Khoj should skip tool use for
that iteration but continue to try to respond.
Previously chatml messages were just strings, now they can be list of
strings or list of dicts as well.
- Use json serialization to manage their variations and truncate them
before printing for context.
- Put logic in single function for use across chat models
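A minimal sketch of that shared truncation helper, with an illustrative length limit:
```python
import json

def truncate_for_print(content, max_chars: int = 500) -> str:
    """Normalize str | list[str] | list[dict] message content for logging."""
    serialized = content if isinstance(content, str) else json.dumps(content)
    return serialized[:max_chars]
```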
- Default to evaluation decision of None when either agent or
evaluator llm fails. This fixes accuracy calculations on errors
- Fix showing color for decision True
- Enable arg flags to specify output results file paths
Previously chatml messages were just strings.
Since gemini, anthropic models always have messages as list of
strings, truncate those strings instead of the list of message content
Removing binary data and truncating large data in output files
generated by code runs should improve speed and cost of research mode
runs with large or binary output files.
Previously binary data in code results was passed around in iteration
context during research mode. This made the context inefficient
because models have limited efficiency and reasoning capabilities over
b64 encoded image (and other binary) data and would hit context limits
leading to unnecessary truncation of other useful context
Also remove image data when logging output of code execution
- Allow passing user files as input into code sandbox for analysis
- Update prompt to give more example of complex, multi-line code
- Simplify logic for model. Run one program at a time,
instead of allowing model to run multiple programs in parallel
- Show code generated charts and docs in the Reference pane of the web app and make them downloadable
- Add a border below heading
- Show code snippet in pre block
- Overflow-x when reference side panel open to allow seeing whole text
via x-scroll
- Align header, body position of reference cards with each other
- Only show filename in doc reference cards at message bottom.
Show full file path in hover and reference side panel
- Improve rendering code reference with better icons, smaller text and
different line clamps for better visibility
- Show code output files as sub card of code card in reference section
- Allow downloading files generated by code instead of rendering it in
chat message directly
- Show executed code before online references in reference panel
- Fix to render code generated chart with images, excalidraw diagrams
- Fix to save code context to chat history in image, diagram output modes
- Fix bug in image markdown being wrapped twice in markdown syntax
- Render newline in code references shown on chat page of web app
Previously newlines weren't getting rendered. This made the code
executed by Khoj hard to read in references. This change fixes that.
`dangerouslySetInnerHTML` usage is justified as the rendered code
snippet is sanitized by DOMPurify before rendering.
- Run one program at a time, instead of allowing model to pass
multiple programs to run in parallel to simplify logic for model
- Update prompt to give more example of complex, multi-line code
- Allow passing user files as input into code sandbox for analysis
- Log code execution timer at info level to evaluate execution latencies
in production
- Type the generated code for easier processing by caller functions
Support including file attachments in the chat message
Now that models have much larger context windows, we can reasonably include full texts of certain files in the messages. Do this when an explicit file filter is set in a conversation. Do so in a separate user message in order to mitigate any confusion in the operation.
Pipe the relevant attached_files context through all methods calling into models.
This breaks certain prior behaviors. We will no longer automatically be processing/generating embeddings on the backend and adding documents to the "brain". You'll have to go to settings and go through the upload documents flow there in order to add docs to the brain (i.e., have search include them during question / response).
This will ensure only unique online references are shown in all
clients.
The duplication issue was exacerbated in research mode as even with
different online search queries, you can get previously seen results.
This change does a global deduplication across all online results seen
across research iterations before returning them in the client response.
- Deduplicate online, doc search queries across research iterations.
This avoids running previously run online, doc searches again and
dedupes online, doc context seen by model to generate response.
- Deduplicate online search queries generated by chat model for each
user query.
- Do not pass online, docs, code context separately when generating a
response in research mode. These are already collected in the meta
research passed with the user query
- Improve formatting of context passed to generate research response
- Use xml tags to delimit context. Pass per iteration queries in each
iteration result
- Put user query before meta research results in user message passed
for generating response
These deduplications will improve the speed, cost & quality of research mode
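A minimal sketch of the global link-based deduplication, with illustrative field and function names:
```python
def deduplicate_online_results(iterations: list[dict]) -> dict:
    """Keep only the first occurrence of each result link across iterations."""
    seen_links: set = set()
    deduped: dict = {}
    for online_results in iterations:
        for query, results in online_results.items():
            unique = [r for r in results if r.get("link") not in seen_links]
            seen_links.update(r["link"] for r in unique if "link" in r)
            if unique:
                deduped.setdefault(query, []).extend(unique)
    return deduped
```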
Previously the whole research mode response would fail if the pick
next tool call to chat model failed. Now instead of it completely
failing, the researcher actor is told to try again in next iteration.
This allows for a more graceful degradation in answering a research
question even if a (few?) calls to the chat model fail.
Jina search API returns the content of all webpages in search results.
Previously the code wouldn't remove content beyond the max_webpages_to_read
limit set. Now, webpage content in organic results is explicitly
removed beyond the requested max_webpages_to_read limit.
This should align behavior of online results from Jina with other
online search providers. And restrict llm context to a reasonable size
when using Jina for online search.
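A minimal sketch of trimming webpage content from Jina organic results, with assumed field names:
```python
def trim_organic_results(organic_results: list[dict], max_webpages_to_read: int) -> list[dict]:
    """Drop full webpage content beyond the requested read limit."""
    for index, result in enumerate(organic_results):
        if index >= max_webpages_to_read:
            result.pop("content", None)  # keep title, link, snippet only
    return organic_results
```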
This fixes chat with old chat sessions. Fixes issue where old Whatsapp
users couldn't chat with Khoj because chat history doc context was
stored as a list earlier
Command rate limits wouldn't be shown to the user as the server wouldn't
be able to handle an HTTP exception in the middle of streaming.
Catch the exception and render it as an LLM response message instead for
visibility into command rate limiting to the user on the client.
Log rate limit messages for all rate limit events on the server as info
messages.
Convert exception messages into first person responses by Khoj to
prevent breaking the fourth wall and provide more details on what
happened and possible ways to resolve them.
Previously the batch start index wasn't being passed so all batches
started in parallel were showing the same processing example index
This change doesn't impact the evaluation itself, just the index shown
of the example currently being evaluated
- Document is first converted in the chatinputarea, then sent to the chat component. From there, it's sent in the chat API body and then processed by the backend
- We couldn't directly use an UploadFile type in the backend API because we'd have to convert the api type to a multipart form. This would require other client side migrations without uniform benefit, which is why we do it in this two-phase process. This also gives us capacity to repurpose the more generic interface down the road.
- Why
We need better, automated evals to measure performance shifts of Khoj
across prompt, model and capability changes.
Google's FRAMES benchmark evaluates multi-step retrieval and reasoning
capabilities of AI agents. It's a good starter benchmark to evaluate Khoj.
- Details
This PR adds an eval script to evaluate Khoj responses on the FRAMES
benchmark prompts against the ground truth provided by it.
Script allows configuring sample size, batch size, sampling queries from the
eval dataset.
Gemini is used as an LLM Judge to auto grade Khoj responses vs ground truth
data from the benchmark.
This was previously required, but now it's only useful for more
advanced settings, not typical for self-hosting users.
With recent updates, the user's selected chat model is used for both
Khoj's train of thought and response. This makes it easy to
switch your preferred chat model directly from the user settings
page and not have to update this in the admin panel as well.
Reflect these code changes in the docs, by removing the unnecessary
step for self-hosted users to create a server chat setting when using
an OpenAI proxy service like Ollama, LiteLLM etc.
For file attachments, we'll want to limit the file sizes for which this is used and provide more helpful UI indicators that this sort of behavior is taking place.
- The server has moved to a model of standardization for the embeddings generation workflow. Remove references to the support for differentiated models.
- The migration script for a new model needs to be updated to accommodate full regeneration.
- Improve chat actors and their prompts for research mode.
- Add documentation to enable the code tool when self-hosting Khoj
- Edit Chat Messages
- Store Turn Id in each chat message.
- Expose API to delete chat message.
- Expose a delete chat message button on the web app to delete a chat message turn
- Set LLM Generation Seed for Reproducible Debugging and Testing
- Setting seed for LLM generation is supported by Llama.cpp and OpenAI models.
This can (somewhat) restrain LLM output
- Getting fixed responses for fixed inputs helps test, debug longer reasoning chains like used in advanced reasoning
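A minimal sketch of seeding generation with the openai client; seed support is best-effort and varies by provider, and the model name is illustrative:
```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize my notes."}],
    seed=42,        # fixed seed (somewhat) restrains output across runs
    temperature=0,  # low temperature further reduces variation
)
print(response.choices[0].message.content)
```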
## Overview
Khoj can now go into research mode and use a python code interpreter. These are experimental features that are being released early for feedback and testing.
- Research mode allows Khoj to dynamically select the tools it needs to best answer the question. It is also allowed more iterations to get to a satisfactory answer. Its more dynamic train of thought is shown to improve visibility into its thinking.
- Adding the ability for Khoj to use a python code interpreter is an adjacent capability. It can help Khoj do some data analysis and generate charts for you. A sandboxed python environment to run code is provided using [cohere-terrarium](https://github.com/cohere-ai/cohere-terrarium?tab=readme-ov-file), [pyodide](https://pyodide.org/).
## Analysis
Research mode (significantly?) improves Khoj's information retrieval for more complex queries requiring multi-step lookups but takes longer to run. It can research for longer, requiring less back-n-forth with the user to find an answer.
Research mode gives most gains when used with more advanced chat models (like o1, 4o, new claude sonnet and gemini-pro-002). Smaller models improve their response quality but tend to get into repetitive loops more often.
## Next Steps
- Get community feedback on research mode. What works, what fails, what is confusing, what'd be cool to have.
- Tune Khoj's capabilities for longer autonomous runs and to generalize across a larger range of model sizes
## Miscellaneous Improvements
- Khoj's train of thought is saved and shown for all messages, not just the latest one
- Render charts generated by Khoj and code running using the code tool on the web app
- Align chat input color to currently selected agent color
- Dedent code for readability
- Use a better name for the in-research-mode check
- Continue to remove inferred summarize command when multiple files in
file filter even when not in research mode
- Continue to show select information source train of thought.
It was removed by mistake earlier
## Overview
Use git to capture prompt traces of khoj's train of thought. View, analyze and debug them using your favorite git client (e.g vscode, magit).
- Each commit captures an interaction with an LLM
The commit writes the query, response and system message each to a separate file in the repo.
The commit message captures the chat model, Khoj version and other metadata
- Each conversation turn can have multiple interactions with an LLM (e.g Khoj's train of thought)
- Each new conversation turn forks from and merges back into its conversation branch
- Each new conversation branches from the user branch
- Each new user branches from root commit on the main branch
## Usage
1. Set `KHOJ_DEBUG=true` or start khoj in very verbose mode with `khoj -vv` to turn on prompt tracing
2. Chat with Khoj as usual
3. Open the promptrace git repo to view the generated prompt traces using your favorite git porcelain.
The Khoj prompt trace git repo is created at `/tmp/khoj_promptrace` by default. You can configure the prompt trace directory by setting the `PROMPTRACE_DIR` environment variable.
## Implementation
- Add utility functions to capture prompt traces using git (via `gitpython`)
- Make each model provider in Khoj commit their LLM interactions with promptrace
- Weave chat metadata from chat API through all chat actors and commit it to the prompt trace
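A minimal sketch of one promptrace commit via gitpython; file names and the commit message format are illustrative:
```python
from pathlib import Path

from git import Repo

repo_path = Path("/tmp/khoj_promptrace")
repo = Repo.init(repo_path)

# Write each part of the LLM interaction to its own file
for name, content in {"query": "...", "response": "...", "system": "..."}.items():
    (repo_path / f"{name}.txt").write_text(content)

repo.index.add(["query.txt", "response.txt", "system.txt"])
repo.index.commit("chat-model: gpt-4o | khoj-version: 1.x.x")  # metadata in message
```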
- Match the online query generator prompt to the formatting of
extract questions
- Separate iteration results by newline
- Improve webpage and online tool descriptions
- Allow server to start if loading embedding model fails with an error.
This allows fixing the embedding model config via admin panel.
Previously server failed to start if embedding model was configured
incorrectly. This prevented fixing the model config via admin panel.
- Convert boolean strings in config json to actual booleans when passed
via the admin panel as json, before passing to model and query configs
- Only create the default model if no search model is configured by the admin.
Return the first created search model if it's been configured by the admin.
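A minimal sketch of the boolean coercion for admin-panel JSON configs; the helper name is illustrative:
```python
def coerce_booleans(config: dict) -> dict:
    """Convert "true"/"false" strings in admin panel JSON to real booleans."""
    coerced = {}
    for key, value in config.items():
        if isinstance(value, str) and value.lower() in ("true", "false"):
            coerced[key] = value.lower() == "true"
        else:
            coerced[key] = value
    return coerced
```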
Models were getting a bit confused about who is searching for whose
information. Using third person to explicitly call out on whose behalf
these searches are running seems to perform better across
models (gemini, gpt etc.), even if the role of the message is user.
Use a placeholder for newlines in json object values until the json is parsed
and values extracted. This is useful when research mode models output
multi-line codeblocks in queries etc.
The Anthropic API doesn't have the ability to enforce a response as a valid
json object, unlike all the other model types.
While the model will usually adhere to json output instructions,
this step is meant to more strongly encourage it to just output a json
object when a response_type of json_object is requested.
Separate conversation history with user from the conversation history
between the tool AIs and the researcher AI.
Tools AIs don't need top level conversation history, that context is
meant for the researcher AI.
The invoked tool AIs need previous attempts at using the tool in this
research run's iteration history to better tune their next run.
Or at least that is the hypothesis to break the models' looping.
Models weren't generating a diverse enough set of questions. They'd do
minor variations on the original query. What is required is asking
queries from a bunch of different lenses to retrieve the requisite
information.
This prompt update shows the AIs the breadth of questions to ask, by
example and instruction. Seems like performance improved, based on vibes.
- Improve mobile friendliness with new research mode toggle, since chat input area is now taking up more space
- Remove clunky title from the suggestion card
- Fix fk lookup error for agent.creator
Overview
---
- Put context into separate user message before sending to chat model.
This should improve model response quality and truncation logic in code
- Pass online context from chat history to chat model for response.
This should improve response speed when previous online context can be reused
- Improve format of notes, online context passed to chat models in prompt.
This should improve model response quality
Details
---
The document, online search context are now passed as separate user
messages to chat model, instead of being added to the final user message.
This will improve
- The model's ability to differentiate data from the user query.
That should improve response quality and reduce prompt injection
probability
- Truncation logic, by making it simpler and more robust.
When the context window is hit, we can simply pop messages to auto truncate
context in order of context, user, assistant message for each
conversation turn in history until we reach the current user query.
The complex, brittle logic to extract the user query from context in the
last user message isn't required.
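A minimal sketch of this pop-based truncation, with a stubbed token counter:
```python
def truncate_messages(messages: list, max_tokens: int, count_tokens) -> list:
    """Drop whole messages, oldest turn first, until the context fits.

    Messages are ordered oldest to newest; the final message is the
    current user query and is never dropped.
    """
    while len(messages) > 1 and count_tokens(messages) > max_tokens:
        messages.pop(0)  # pops context, then user, then assistant of oldest turn
    return messages
```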
Align context passed to offline chat model with other chat models
- Pass context in separate message for better separation between user
query and the shared context
- Pass filename in context
- Add online results for webpage conversation command
Context role was added to allow changing message truncation order based
on context role as well.
Revert it for now since this is not currently being done.
Previously the model would rarely read webpages after a webpage search. Need
the model to read webpages more regularly for deeper research and to stop
getting stuck in repetitive online search loops
Previously passing online results as a json dump in prompts was less
readable for humans, and I'm guessing less readable for
models (trained on human data) as well?
- Start from this branch's src/khoj/routers/api_chat.py
Add tracer to all old and new chat actors that don't have it set
when they are called.
- Update the new chat actors like pick next tool etc. to use the tracer too
- Conflicts:
Combine both sides of the conflict in all 3 files below
- src/khoj/processor/conversation/utils.py
- src/khoj/routers/helpers.py
- src/khoj/utils/helpers.py
- Message train of thought forks and merges from its conversation branch
- Conversation branches from user branch
- User branches from root commit on the main branch
- Weave chat tracer metadata from api endpoint through all chat actors
and commit it to the prompt trace
### Major
- Give Vision to Anthropic models in Khoj
### Minor
- Reuse logic to format messages for chat with anthropic models
- Make the get image from url function more versatile and reusable
- Encourage output mode chat actor to output only json and nothing else
- Use a single standard search model across the server. There's diminishing benefits for having multiple user-customizable search models.
- We may want to add server-level customization for specific tasks
- Store the search model used to generate a given entry on the `Entry` object
- Remove user-facing APIs and view
- Add a management command for migrating the default search model on the server
In a future PR (after running the migration), we'll also remove the `UserSearchModelConfig`
The latest claude model wanted to say more than just give the json output.
The updated prompt encourages the model to output just json. This is
similar to what is already being done for other prompts
It was previously added under the google utils. Now it can be used by
other conversation processors as well.
The updated function
- can get both base64 encoded and PIL formatted images from url
- will return the media type of the image as well in response
* Create explicit flow to enable the free trial
The current design is confusing. It obfuscates the fact that the user is on a free trial. This design will make the opt-in explicit and more intuitive.
* Use the Subscription Type enum instead of hardcoded strings everywhere
* Use length of free trial in the frontend code as well
Had temporarily updated the default selected agent to last used.
Revert for now as
1. The previous logic was buggy. It didn't select the default agent
even when the last used agent was the default agent. Which would
require more work.
2. It may be too early anyway to set the default agent to last used.
Adding div elements to the message to render degraded the text copied to
the clipboard for messages with user uploaded images.
This change fixes that by separating the message to render from the message
for the clipboard. It ensures differently formatted forms of the user
images are added to the two, to allow proper rendering while still
having decently formatted text copied to the clipboard
Add a newline instead of sending the message when the Enter key is hit on
mobile displays. On phones the shift key doesn't exist and the send button
is easily clickable.
Limit hitting the Enter key to send a message to computers = larger displays
= expected to have full fledged keyboards.
## Overview
Allow quickly selecting, switching agents from agents pane on home page of web app
## Details
- Show all agents in carousel on home screen agent pane of web app
- Smart Sort
1. Pin default agent as first for ease of access
2. Show used agents by MRU for ease of access
3. Shuffle unused agents for discoverability
- Select most recently used agent to chat with by default
- Push smart sort logic down to API
- Common logic can be reused across clients
- Agent sort was previously done in web app
- Focus on chat input on agent select
- Double click agent on home page to open edit agent card on agents page
## Overview
- Add vision support for Gemini models in Khoj
- Allow sharing multiple images as part of user query from the web app
- Handle multiple images shared in query to chat API
- Remove border from agent detail hover card on home page
- Do not wrap long agent names in agent pills on home page
- Handle scenario where chatInputRef is null
Add support for generating dynamic diagrams in flow with Excalidraw (https://github.com/excalidraw/excalidraw). This happens in three steps:
1. Default information collection & intent determination step.
2. Improving the overall guidance of the prompt for generating a JSON, Excalidraw-compatible declaration.
3. Generation of the diagram to output to the final UI.
Add support in the web UI.
Previously only notes context from chat history was included.
This change includes online context from chat history for the model to use
for response generation.
This can reduce the need for online lookups by reusing previous online
context for faster responses. But it will increase overall response time
when not reusing past online context, as context builds up faster per
conversation.
Unsure if inclusion of this context is preferable. If not, both notes and
online context should be removed.
Marking the context message with the assistant role doesn't translate well
across chat models. E.g.
- Gemini can't handle consecutive messages by role = model well
- Claude will merge consecutive messages by the same role. In the current
message ordering the context message would get merged into the
previous assistant response. And if the context message is moved after the
user query, the truncation logic will have to hop and skip while doing
deletions
- GPT seems to handle consecutive roles of any type fine
Using context role = user generalizes better across chat models for
now and aligns with previous behavior.
Improve separation of note snippets and show their origin file in the notes
prompt to have more readable, contextualized text shared with the model.
Previously the references dict was being passed directly as a string.
The documents didn't look well formatted and were less intelligible.
- Passing file path along with notes snippets will help contextualize
the notes better.
- Better formatting should help with making notes more readable by the
chat model.
- Double click on agent to open edit agent card
- Focus on chat input pane when agent selected/clicked
for quick, smooth agent switch and message flow
- Hover on agent to see agent detail card on non-mobile displays
- Use debounce to only show when hover on card for a bit
- Default to None for the input_tools and output_modes so that they can be managed in the admin panel
- Hold off on showing off all Public Agents until we have a better experience for user profiles etc.
Have get agents API return agents ordered intelligently
- Put the default agent first
- Sort used agents by most recently chatted with agent for ease of access
- Randomly shuffle the remaining unused agents for discoverability
This change wraps the agent pane in a scroll area with all agents shown.
It allows selecting an agent to chat with directly from the home
screen without breaking flow and having to jump to the agents page.
The previous flow was not convenient to quickly and consistently start
chat with one of your standard agents.
This was because a random subset of agents were shown on the home page.
To start chat with an agent not shown on home screen load you had to
open the agents page and initiate the conversation from there.
One limitation of this methodology is that localStorage has a limit in how much data it can take. Should add more graceful error handling here as well.
The model currently has difficulty following instructions when an image is shared. It's more likely to try and output an image. Update the prompt to make a clearer distinction.
- Put the attached images display div inside the same parent div as
the text area
- Keep the attachment, microphone/send message buttons aligned with
the text area. So the attached images just show up at the top of the
text area but everything else stays at the same horizontal height as
before.
- This improves the UX by
- Ensuring that the attached images do not obscure the agents pane
above the chat input area
- The attached images visually look like they are inside the actual
input area, rather than floating above it. So the visual aligns
with the semantics
Previously the web app only expected a single image to be shared by
the user as part of their query.
This change allows sharing multiple images from the web app.
Closes #921
Previously Khoj could respond to a single shared image at a time.
This change updates the chat API to accept multiple images shared by
the user and send them to the appropriate chat actors, including the
openai response generation chat actor, for getting an image aware
response
Khoj shouldn't refuse to respond to the user if web lookups fail.
It should transparently mention that online search etc. failed,
but try to respond as best it can without those references.
This change ensures a response to the user's query is attempted even
when web info retrieval fails.
The huggingface endpoint can be flaky. Khoj shouldn't refuse to
respond to the user if document search fails.
It should transparently mention that document lookup failed,
but try to respond as best it can without the document references.
This change provides graceful failover when inference endpoint
requests fail, either when encoding the query or reranking retrieved docs.
- Advanced chat model should also fallback to the user chat model if set
- Get conversation config should fallback to the user chat model if set
These assume no server chat model setting is configured
Update the regex to also include any links to code generated images that
aren't explicitly meant to be displayed inline. This allows folks to
download the image (unlike the fake, non-working link created by the
model)
Previously Khoj would start answering the previous query. This may be
because the prompt uses User for prompts in chat history but was using
Q for the current user prompt.
Make the number of webpages to read automatically on search_online
configurable via an argument.
Set it to default to 1, so other callers of the function
are unaffected.
But iterative chat director can still decide which, if
any, webpages to read based on the online search it performs
This change allows the iterative director to dive deeper into its
research as the data extracted contains relevant links from the webpage
Previous summarization prompt didn't extract relevant links from the
webpage which limited further explorations from webpages
Move construct_chat_history and ChatEvent enum into conversation.utils
and move send_message_to_model_wrapper to conversation.helper to
modularize code. And start thinning out the bloated routers.helper
- conversation.util components are shared functions that conversation
child packages can use.
- conversation.helper components can't be imported by conversation
packages but it can use these child packages
This division allows better modularity while avoiding circular
import dependencies
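A minimal sketch of the layering, with illustrative members; conversation.utils holds shared primitives importable by any conversation child package:
```python
# src/khoj/processor/conversation/utils.py -- imports no sibling packages
from enum import Enum

class ChatEvent(Enum):
    MESSAGE = "message"
    REFERENCES = "references"
    END_RESPONSE = "end_response"
```
conversation.helper can then import the chat provider child packages for send_message_to_model_wrapper, while no child package imports conversation.helper, which breaks the import cycle.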
Create a python code executing chat actor
- The chat actor generates python code within sandbox constraints
- Run the generated python code in the cohere terrarium, pyodide
based sandbox accessible at the sandbox url
- Create a more dynamic reasoning agent that can evaluate information and understand what it doesn't know, making moves to get that information
- Lots of hacks and code that needs to be reversed later on before submission
* Start any message with `/research` to try out the experimental research mode with Khoj.
* Anyone can now [create custom agents](https://blog.khoj.dev/posts/create-agents-on-khoj/) with tunable personality, tools and knowledge bases.
* [Read](https://blog.khoj.dev/posts/evaluate-khoj-quality/) about Khoj's excellent performance on modern retrieval and reasoning benchmarks.
***
## Overview
[Khoj](https://khoj.dev) is a personal AI app to extend your capabilities. It smoothly scales up from an on-device personal AI to a cloud-scale enterprise AI.
- Chat with any local or online LLM (e.g llama3, qwen, gemma, mistral, gpt, claude, gemini).
# Use the following line to use the latest version of khoj. Otherwise, it will build from source. Set this to ghcr.io/khoj-ai/khoj-cloud:latest if you want to use the prod image.
image: ghcr.io/khoj-ai/khoj:latest
# Uncomment the following line to build from source. This will take a few minutes. Comment the next two lines out if you want to use the official image.
> Describes the Khoj settings configurable via the admin panel
By default, your admin panel is available at `http://localhost:42110/server/admin/`. You can access the admin panel by logging in with your admin credentials (this would be your `KHOJ_ADMIN_EMAIL` and `KHOJ_ADMIN_PASSWORD`). The admin panel allows you to configure various settings for your Khoj server.
## App Settings
### Agents
Add all the agents you want to use for your different use-cases like Writer, Researcher, Therapist etc.
To add a server chat setting:
- The `Advanced` field doesn't need to be set when self-hosting. When unset, the `Default` chat model is used for all users and the intermediate steps.
### AI Model API
These settings configure APIs to interact with AI models.
For each AI Model API you [add](http://localhost:42110/server/admin/database/aimodelapi/add):
- `Api key`: Set to your [OpenAI](https://platform.openai.com/api-keys), [Anthropic](https://console.anthropic.com/account/keys) or [Gemini](https://aistudio.google.com/app/apikey) API keys.
- `Name`: Give the configuration any friendly name like `OpenAI`, `Gemini`, `Anthropic`.
- `Api base url`: Set the API base URL. This is only relevant to set if you're using another OpenAI-compatible proxy server like [Ollama](/advanced/ollama) or [LMStudio](/advanced/lmstudio).


### Search Model Configs
Search models are used to generate vector embeddings of your documents for natural language search and chat. You can choose any of the [embeddings models on HuggingFace](https://huggingface.co/models?pipeline_tag=sentence-similarity) to use for creating the vector embeddings.
This is only helpful for self-hosted users or teams. If you're using [Khoj Cloud](https://app.khoj.dev), both Magic Links and Google OAuth work.
:::
By default, most of the instructions for self-hosting Khoj assume a single user, and so the default configuration is to run in anonymous mode. However, if you want to enable authentication, you can do so either with [Magic Links](#using-magic-links) or [Google OAuth](#using-google-oauth) as shown below. This can be helpful to make Khoj securely accessible to you and your team.
:::tip[Note]
Remove the `--anonymous-mode` flag from your khoj start up command or docker-compose file to enable authentication.
:::
## Using Magic Links
The most secure way to do this is to integrate with [Resend](https://resend.com).
1. Setup your account at https://resend.com
2. Set an environment variable for `RESEND_API_KEY`. You can get your API key [here](https://resend.com/api-keys).
3. Set an environment variable for `RESEND_EMAIL`. This is the email address that will show up in your `from` field when sending magic links.
This will allow you to automatically send sign-in links to users who want to log in.
It's still possible to use the magic links feature without Resend, but you'll need to manually send the magic links to users who want to log in.
## Manually sending magic links
1. The user will have to enter their email address in the login form.
They'll click `Send Magic Link`. Without the Resend API key, this will just create an unverified account for them in the backend
<imgsrc="/img/magic_link.png"alt="Magic link login form"width="400"/>
2. You can get their magic link using the admin panel
Go to the [admin panel](http://localhost:42110/server/admin/database/khojuser/). You'll see a list of users. Search for the user you want to send a magic link to. Tick the checkbox next to their row, and use the action drop down at the top to 'Get email login URL'. This will generate a magic link that you can send to the user, which will appear at the top of the admin interface.
| Get email login URL | Retrieved login URL |
|---------------------|---------------------|
| <imgsrc="/img/admin_get_emali_login.png"alt="Get user magic sign in link"width="400"/>| <imgsrc="/img/admin_successful_login_url.png"alt="Successfully retrieved a login URL"width="400"/>|
3. Send the magic link to the user. They can click on it to log in.
Once they click on the link, they'll automatically be logged in. They'll have to repeat this process for every new device they want to log in from, but they shouldn't have to repeat it on the same device.
A given magic link can only be used once. If the user tries to use it again, they'll be redirected to the login page to get a new magic link.
## Using Google OAuth
To set up your self-hosted Khoj with Google Auth, you need to create a project in the Google Cloud Console and enable the Google Auth API.
To implement this, you'll need to:
1. You must use the `python` package or build from source, because you'll need to install additional packages for the google auth libraries (`prod`). The syntax to install the right packages is
```
pip install khoj[prod]
```
2. [Create authorization credentials](https://developers.google.com/identity/sign-in/web/sign-in) for your application.
3. Open your [Google cloud console](https://console.developers.google.com/apis/credentials) and create a configuration like below for the relevant `OAuth 2.0 Client IDs` project:
4. Configure these environment variables: `GOOGLE_CLIENT_SECRET`, and `GOOGLE_CLIENT_ID`. You can find these values in the Google cloud console, in the same place where you configured the authorized origins and redirect URIs.
That's it! That should be all you have to do. Now, when you reload Khoj without `--anonymous-mode`, you should be able to use your Google account to sign in.
By default, most of the instructions for self-hosting Khoj assume a single user, so the default configuration is to run in anonymous mode. However, if you want to enable authentication, you can do so with either [Magic Links](#using-magic-links) or [Google OAuth](#using-google-oauth) as shown below. This can be helpful to make Khoj securely accessible to you and your team.
:::tip[Note]
Remove the `--anonymous-mode` flag from your Khoj startup command or docker-compose file to enable authentication.
:::
## Using Magic Links
The most secure way to do this is to integrate with [Resend](https://resend.com).
1. Set up your account at https://resend.com
2. Set an environment variable for `RESEND_API_KEY`. You can get your API key [here](https://resend.com/api-keys).
3. Set an environment variable for `RESEND_EMAIL`. This is the email address that will show up in your `from` field when sending magic links.
This will allow you to automatically send sign-in links to users who want to log in.
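For example, if you start Khoj from a shell, the Resend configuration might look like this (the values below are placeholders):
```bash
# Placeholder values -- use your own Resend API key and verified sender address
export RESEND_API_KEY=re_xxxxxxxxxxxx
export RESEND_EMAIL=khoj@example.com
```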
It's still possible to use the magic links feature without Resend, but you'll need to manually send the magic links to users who want to log in.
## Manually sending magic links
1. The user will have to enter their email address on the login page at http://localhost:42110/login.
They'll click `Get Login Link`. Without the Resend API key, this will just create an unverified account for them in the backend.
<img src="/img/magic_link.png" alt="Magic link login form" width="400"/>
2. You can get their magic link using the admin panel
Go to the [admin panel](http://localhost:42110/server/admin/database/khojuser/). You'll see a list of users. Search for the user you want to send a magic link to. Tick the checkbox next to their row, and use the action dropdown at the top to 'Get email login URL'. This will generate a magic link that you can send to the user, which will appear at the top of the admin interface.
| Get email login URL | Retrieved login URL |
|---------------------|---------------------|
| <img src="/img/admin_get_emali_login.png" alt="Get user magic sign in link" width="400" />| <img src="/img/admin_successful_login_url.png" alt="Successfully retrieved a login URL" width="400" />|
3. Send the magic link to the user. They can click on it to log in.
Once they click on the link, they'll automatically be logged in. They'll have to repeat this process for every new device they want to log in from, but they shouldn't have to repeat it on the same device.
A given magic link can only be used once. If the user tries to use it again, they'll be redirected to the login page to get a new magic link.
## Using Google OAuth
For this method, you'll need to use the prod version of the Khoj package. You can install it as below:
<Tabs groupId="server" queryString>
<TabItem value="docker" label="Docker">
Update your `docker-compose.yml` to use the prod image
```yaml
image: ghcr.io/khoj-ai/khoj-cloud:latest
```
</TabItem>
<TabItem value="pip" label="Pip">
```bash
pip install khoj[prod]
```
</TabItem>
</Tabs>
To set up your self-hosted Khoj with Google Auth, you need to create a project in the Google Cloud Console and enable the Google Auth API.
To implement this, you'll need to:
1. [Create authorization credentials](https://developers.google.com/identity/sign-in/web/sign-in) for your application.
3. Open your [Google cloud console](https://console.developers.google.com/apis/credentials) and configure the relevant `OAuth 2.0 Client IDs` project with the authorized origins and redirect URIs for your Khoj server.
3. Configure these environment variables: `GOOGLE_CLIENT_SECRET`, and `GOOGLE_CLIENT_ID`. You can find these values in the Google cloud console, in the same place where you configured the authorized origins and redirect URIs.
That's it! That should be all you have to do. Now, when you reload Khoj without `--anonymous-mode`, you should be able to use your Google account to sign in.
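For example, in a shell-based setup you might export the credentials like this (placeholder values; copy the real ones from your Google Cloud console):
```bash
# Placeholders -- copy the real values from your Google Cloud OAuth 2.0 Client ID
export GOOGLE_CLIENT_ID=1234567890-abc123.apps.googleusercontent.com
export GOOGLE_CLIENT_SECRET=GOCSPX-xxxxxxxxxxxx
```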
This is only helpful for self-hosted users. If you're using [Khoj Cloud](https://app.khoj.dev), you can use our first-party supported models.
:::
:::info
Khoj can directly run local LLMs [available on HuggingFace in GGUF format](https://huggingface.co/models?library=gguf). The integration with Ollama is useful to run Khoj on Docker and have the chat models use your GPU or to try new models via CLI.
:::
Ollama allows you to run [many popular open-source LLMs](https://ollama.com/library) locally from your terminal.
For folks comfortable with the terminal, Ollama's terminal-based flows can ease setup and management of chat models.
Ollama exposes a local [OpenAI API compatible server](https://github.com/ollama/ollama/blob/main/docs/openai.md#models). This makes it possible to use chat models from Ollama with Khoj.
## Setup
:::info
Restart your Khoj server after the first run, or after updating the settings below, to ensure all settings are applied correctly.
:::
<Tabs groupId="type" queryString>
<TabItem value="first-run" label="First Run">
<Tabs groupId="server" queryString>
<TabItem value="docker" label="Docker">
1. Set up Ollama: https://ollama.com/
2. Download your preferred chat model with Ollama. For example,
```bash
ollama pull llama3.1
```
3. Uncomment `OPENAI_API_BASE` environment variable in your downloaded Khoj [docker-compose.yml](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml#:~:text=OPENAI_API_BASE)
4. Start Khoj docker for the first time to automatically integrate and load models from the Ollama instance running on your host machine
```bash
# run below command in the directory where you downloaded the Khoj docker-compose.yml
docker-compose up
```
</TabItem>
<TabItem value="pip" label="Pip">
1. Set up Ollama: https://ollama.com/
2. Download your preferred chat model with Ollama. For example,
```bash
ollama pull llama3.1
```
3. Set `OPENAI_API_BASE` environment variable to `http://localhost:11434/v1/` in your shell before starting Khoj for the first time
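For example, assuming the usual single-user (anonymous mode) start command:
```bash
# Point Khoj at Ollama's OpenAI-compatible endpoint, then start Khoj
export OPENAI_API_BASE=http://localhost:11434/v1/
khoj --anonymous-mode
```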
This is only helpful for secure cross-device access to **self-hosted** Khoj. You **do not** need this if you're using [Khoj Cloud](https://app.khoj.dev).
:::
[Tailscale](https://tailscale.com) simplifies creating a private VPN using [Wireguard](https://www.wireguard.com/) and OAuth. So you can host and access services on your devices from anywhere.
The instructions below are one way to simply and securely access your self-hosted Khoj from your phone, laptop, etc.
### Minimal Setup
1. Set up Khoj on your preferred machine following the [standard steps](/get-started/setup)
2. Sign up for [Tailscale](https://tailscale.com) and install the app on the machines you want to access Khoj from. This usually includes your Khoj server, your phone and your laptop. Note the Tailscale IP of your Khoj server.
3. Start Khoj on your server by including the flag `--host <your_server_tailscale_ip>`
4. Open `http://<your_server_tailscale_ip>:42110` to access Khoj from any device on your Tailscale network!
### HTTPS Certificate
:::info
Tailscale uses Wireguard to encrypt and route traffic between your machines, so HTTPS isn't required for secure access. Enabling HTTPS is only useful to stop browsers from complaining about security and blocking certain features, like clipboard access, on non-HTTPS pages.
:::
1. Enable [MagicDNS](https://tailscale.com/kb/1081/magicdns#enabling-magicdns) and [HTTPS](https://tailscale.com/kb/1153/enabling-https) toggle on your tailscale admin console [DNS](https://login.tailscale.com/admin/dns) page. Note your unique tailscale domain name (usually ends with .ts.net)
2. Create an https certificate for your Khoj server by running the following command:
```bash
# Assuming the server is named `server` and your tailnet is `black-forest.ts.net`
# Note path of the .crt and .key files generated
tailscale cert server.black-forest.ts.net
```
3. Start Khoj to be served via HTTPS on the standard port
```bash
sudo KHOJ_DOMAIN=server.black-forest.ts.net \
khoj \
--sslcert /path/to/your/tailscale.crt \
--sslkey /path/to/your/tailscale.key \
--host=server.black-forest.ts.net \
--port 443
```
4. You should now be able to access Khoj at `https://server.black-forest.ts.net` from any device on your private Tailscale network!
This is only helpful for self-hosted users. If you're using [Khoj Cloud](https://app.khoj.dev), you're limited to our first-party models.
Khoj natively supports local LLMs [available on HuggingFace in GGUF format](https://huggingface.co/models?library=gguf). Using an OpenAI API proxy with Khoj may be useful for ease of setup, trying new models or using commercial LLMs via API.
:::
Khoj can use any OpenAI API compatible server, including local providers like [Ollama](/advanced/ollama), [LMStudio](/advanced/lmstudio) and [LiteLLM](/advanced/litellm), and commercial providers like [HuggingFace](https://huggingface.co/docs/api-inference/tasks/chat-completion#using-the-api), [OpenRouter](https://openrouter.ai/docs/quick-start) etc.
Configuring this allows you to use non-standard, open or commercial, local or hosted LLM models with Khoj. Combining them with Khoj can turn your favorite LLM into an AI agent, allowing you to chat with your docs, find answers from the internet, build custom agents and run automations.
For specific integrations, see our [Ollama](/advanced/ollama), [LMStudio](/advanced/lmstudio) and [LiteLLM](/advanced/litellm) guides.
## General Setup
1. Start your preferred OpenAI API compatible app locally or get API keys from commercial AI model providers.
2. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) on your Khoj admin panel
- Name: `any name`
- Api Key: `any string`
- Api Base Url: **URL of your Openai Proxy API**
3. Create a new [Chat Model](http://localhost:42110/server/admin/database/chatmodel/add) on your Khoj admin panel.
- Name: `llama3` (replace with the name of your local model)
- Model Type: `Openai`
- Openai Config: `<the proxy config you created in step 2>`
- Max prompt size: `2000` (replace with the max prompt size of your model)
- Tokenizer: *Do not set for OpenAI, mistral, llama3 based models*
4. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
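Before relying on it in chat, you can sanity-check that your app really speaks the OpenAI API by querying the standard model list endpoint most OpenAI-compatible servers expose. For example, against Ollama's default base URL (swap in your own `Api Base Url`):
```bash
# Should return a JSON list of the models your server exposes
curl http://localhost:11434/v1/models
```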
Without any desktop clients, you can start chatting with Khoj on WhatsApp.
In order to use Khoj on WhatsApp with your own data, you need to set up a Khoj Cloud account and connect your WhatsApp account to it. This is a one time setup and you can do it from the [Khoj Cloud config page](https://app.khoj.dev/settings).
If you hit usage limits for the WhatsApp bot, upgrade to [a paid plan](https://khoj.dev/#pricing) on Khoj Cloud.
<img src="https://khoj-web-bucket.s3.amazonaws.com/khojwhatsapp.png" alt="WhatsApp QR Code" width="300" height="300" />
You can optionally use `yarn dev` to start a development server for the front-end which will be available at http://localhost:3000. This is especially useful if you're making changes to the front-end code, but not necessary for running Khoj. Note that streaming does not work on the dev server due to how it is handled with SSR in Next.js.
Always run `yarn export` to test your front-end changes on http://localhost:42110 before creating a PR.
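As a quick reference, the typical front-end iteration loop looks like this (run from the web app's source directory in the khoj repo):
```bash
yarn dev     # hot-reloading dev server on http://localhost:3000; response streaming won't work here
yarn export  # build the static assets served on http://localhost:42110; run before every PR
```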
Go to https://app.khoj.dev/settings to connect your Notion workspace(s) to Khoj.
4. In the first step, you generated an API key. Use the newly generated API Key in your Khoj settings, by default at [http://localhost:42110/settings#notion](http://localhost:42110/settings#notion). Click `Save`.
5. Click `Configure` in http://localhost:42110/settings to index your Notion workspace(s).
That's it! You should be ready to start searching and chatting. Make sure you've configured your [chat settings](/get-started/setup#use-khoj).
See [the setup guide](/get-started/setup.mdx) to configure your chat models.
- **On Web**: Open [/chat](https://app.khoj.dev/chat) in your web browser
- **On Obsidian**: Search for *Khoj: Chat* in the [Command Palette](https://help.obsidian.md/Plugins/Command+palette)
- **On Emacs**: Run `M-x khoj <user-query>`
2. Enter your queries to chat with Khoj. Use [slash commands](#commands) and [query filters](/miscellaneous/query-filters) to change what Khoj uses to respond
#### Details
2. These notes, the last few messages and associated metadata are passed to the enabled chat model along with your query to generate a response
#### Conversation File Filters
You can use conversation file filters to limit the notes used in the chat response. To do so, use the left panel in the web UI. Alternatively, you can also use [query filters](/miscellaneous/query-filters) to limit the notes used in the chat response.
Khoj can generate and run very simple Python code snippets as well. This is useful if you want to generate a plot, run a simple calculation, or do some basic data manipulation. LLMs by default aren't skilled at complex quantitative tasks. Code generation & execution can come in handy for such tasks.
Just use the `/code` slash command in your chat.
### Setup (Self-Hosting)
Run [Cohere's Terrarium](https://github.com/cohere-ai/cohere-terrarium) on your machine to enable code generation and execution.
Check the [instructions](https://github.com/cohere-ai/cohere-terrarium?tab=readme-ov-file#development) for running from source.
For running with Docker, you can use our [docker-compose.yml](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml), or start it manually like this:
```bash
docker pull ghcr.io/khoj-ai/terrarium:latest
docker run -d -p 8080:8080 ghcr.io/khoj-ai/terrarium:latest
```
#### Verify
Verify that it's running by evaluating a simple Python expression:
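For instance, a request along these lines should return the evaluated result. The exact endpoint and payload shape here are assumptions; check the Terrarium README for the authoritative API:
```bash
# Assumed request shape for the Terrarium sandbox on its default port 8080
curl -s -X POST http://localhost:8080 \
  -H "Content-Type: application/json" \
  -d '{"code": "print(21 * 2)"}'
```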
To generate images, you just need to provide a prompt to Khoj in which the image generation intent is clear.
## Setup (Self-Hosting)
You have a couple of image generation options.
### Image Generation Models
We support most state of the art image generation models, including Ideogram, Flux, and Stable Diffusion. These will run using [Replicate](https://replicate.com). Here's how to set them up:
1. Get a Replicate API key [here](https://replicate.com/account/api-tokens).
2. Create a new [Text to Image Model](http://localhost:42110/server/admin/database/texttoimagemodelconfig/). Set the `type` to `Replicate`. Use any of the model names you see [on this list](https://replicate.com/pricing#image-models). We recommend the model name `black-forest-labs/flux-1.1-pro` from [Replicate](https://replicate.com/black-forest-labs/flux-1.1-pro).
### OpenAI
1. Get [an OpenAI API key](https://platform.openai.com/settings/organization/api-keys).
2. Set up your OpenAI API key, if you haven't already. See instructions [here](/get-started/setup#add-chat-models)
3. Create a text to image config at http://localhost:42110/server/admin/database/texttoimagemodelconfig/. Use the model name `dall-e-3` to use OpenAI for image generation. Make sure to set the `Ai model api` field to the OpenAI AI Model API you set up in step 2.
Try it out yourself! https://app.khoj.dev
## Self-Hosting
### Search
Online search can work even with self-hosting! You have a few options:
- If you're using Docker, online search should work out of the box with [searxng](https://github.com/searxng/searxng) using our standard `docker-compose.yml`.
- For a non-local, free solution, you can use [JinaAI's reader API](https://jina.ai/reader/) to search online and read webpages. You can get a free API key via https://jina.ai/reader. Set the `JINA_API_KEY` environment variable to your Jina AI reader API key to enable online search.
- To get production-grade, fast online search, set the `SERPER_DEV_API_KEY` environment variable to your [Serper.dev](https://serper.dev/) API key. These search results include additional context like answer box, knowledge graph etc.
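For the non-Docker options, enabling a provider is just a matter of exporting the relevant key before starting Khoj (placeholder values below):
```bash
# Set one of these, depending on the search provider you picked
export JINA_API_KEY=jina_xxxxxxxxxxxx        # free JinaAI reader API
export SERPER_DEV_API_KEY=xxxxxxxxxxxx       # production-grade Serper.dev search
```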
### Webpage Reading
Out of the box, you **don't have to do anything to enable webpage reading**. Khoj will automatically read webpages by using the `requests` library. To get more distributed and scalable webpage reading, you can use the following options:
- If you're using Jina AI's reader API for search, it should work automatically for webpage reading as well.
- For scalable webpage scraping, you can use [Firecrawl](https://www.firecrawl.dev/). Create a new [Webscraper](http://localhost:42110/server/admin/database/webscraper/add/). Set your Firecrawl API key to the Api Key field, and set the type to Firecrawl.
- For advanced webpage reading, you can use [Olostep](https://www.olostep.com/). This has a higher success rate at reading webpages than the default webpage readers. Create a new [Webscraper](http://localhost:42110/server/admin/database/webscraper/add/). Set your Olostep API key to the Api Key field, and set the type to Olostep.
Take advantage of super fast search to find relevant notes and documents from your knowledge base.
- **On Web**: Open https://app.khoj.dev/ in your web browser
- **On Obsidian**: Click the *Khoj search* icon 🔎 on the [Ribbon](https://help.obsidian.md/User+interface/Workspace/Ribbon) or Search for *Khoj: Search* in the [Command Palette](https://help.obsidian.md/Plugins/Command+palette)
- **On Emacs**: Run `M-x khoj <user-query>`
2. Query using natural language to find relevant entries from your knowledge base. Use [query filters](/miscellaneous/query-filters) to limit entries to search
You can also click on the speaker icon next to any message to hear it out loud.
Voice chat will automatically be configured when you initialize the application. The default configuration will run locally. If you want to use the OpenAI whisper API for voice chat, you can set it up by following these steps:
1. Set up your OpenAI API key. See instructions [here](/get-started/setup#add-chat-models).
2. Create a new configuration at http://localhost:42110/server/admin/database/speechtotextmodeloptions/. We recommend the value `whisper-1` and model type `Openai`.
If you want to use the Text to Speech feature, you can set it up by following these steps:
1. Set up your account on [ElevenLabs.io](https://elevenlabs.io/).
2. Configure your API key in your environment variables with the key `ELEVEN_LABS_API_KEY`.
3. (Optional) Create a new [Voice model option](http://localhost:42110/server/admin/database/voicemodeloption/) with a specific voice ID from whichever voice you want to use. You can explore the options [here](https://elevenlabs.io/app/voice-library).
These are the general setup instructions for self-hosted Khoj.
You can install the Khoj server using either [Docker](?server=docker) or [Pip](?server=pip).
:::info[Offline Model + GPU]
To use the offline chat model with your GPU, we recommend using the Docker setup with Ollama. You can also use the local Khoj setup via the Python package directly.
:::
:::info[First Run]
Restart your Khoj server after the first run to ensure all settings are applied correctly.
:::
<Tabs groupId="server" queryString>
- *Option 1*: Install [Docker Desktop](https://docs.docker.com/desktop/install/mac-install/). Make sure you also install the [Docker Compose](https://docs.docker.com/desktop/install/mac-install/) tool.
- *Option 2*: Use [Homebrew](https://brew.sh/) to install Docker and Docker Compose.
```shell
brew install --cask docker
brew install docker-compose
```
<h3>Setup</h3>
1. Download the Khoj docker-compose.yml file [from Github](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml)
2. Configure the environment variables in the `docker-compose.yml`
- Set `KHOJ_ADMIN_PASSWORD`, `KHOJ_DJANGO_SECRET_KEY` (and optionally the `KHOJ_ADMIN_EMAIL`) to something secure. This allows you to customize Khoj later via the admin panel.
- Set `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY` to your API key if you want to use OpenAI, Anthropic or Gemini commercial chat models respectively.
- Uncomment `OPENAI_API_BASE` to use [Ollama](/advanced/ollama?type=first-run&server=docker#setup) running on your host machine. Or set it to the URL of your OpenAI compatible API like vLLM or [LMStudio](/advanced/lmstudio).
3. Start Khoj by running the following command in the same directory as your docker-compose.yml file.
```shell
cd ~/.khoj
docker-compose up
```
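If you prefer doing step 1 from the terminal, something like this works (the raw URL is derived from the GitHub link above; any target directory works):
```shell
# Fetch the compose file into ~/.khoj
mkdir -p ~/.khoj && cd ~/.khoj
curl -fsSL -O https://raw.githubusercontent.com/khoj-ai/khoj/master/docker-compose.yml
```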
</TabItem>
<TabItem value="windows" label="Windows">
<h3>Prerequisites</h3>
@@ -37,30 +63,49 @@ If you want to use the offline chat model and you have a GPU, you should use Ins
wsl --install
```
2. Install [Docker Desktop](https://docs.docker.com/desktop/install/windows-install/) with **[WSL2 backend](https://docs.docker.com/desktop/wsl/#turn-on-docker-desktop-wsl-2)** (default)
<h3>Setup</h3>
1. Download the Khoj docker-compose.yml file [from Github](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml)
2. Configure the environment variables in the `docker-compose.yml`
- Set `KHOJ_ADMIN_PASSWORD`, `KHOJ_DJANGO_SECRET_KEY` (and optionally the `KHOJ_ADMIN_EMAIL`) to something secure. This allows you to customize Khoj later via the admin panel.
- Set `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY` to your API key if you want to use OpenAI, Anthropic or Gemini commercial chat models respectively.
- Uncomment `OPENAI_API_BASE` to use [Ollama](/advanced/ollama) running on your host machine. Or set it to the URL of your OpenAI compatible API like vLLM or [LMStudio](/advanced/lmstudio).
3. Start Khoj by running the following command in the same directory as your docker-compose.yml file.
```shell
# Windows users should use their WSL2 terminal to run these commands
cd ~/.khoj
docker-compose up
```
</TabItem>
</Tabs>
:::info[Remote Access]
By default Khoj is only accessible on the machine it is running on. To access Khoj from a remote machine, see the [Remote Access Docs](/advanced/remote).
Set up which chat model you'd want to use. Khoj supports local and online chat models.
Using Ollama? See the [Ollama Integration](/advanced/ollama) section for more custom setup instructions.
:::
1. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) in the server admin settings.
- Add your [OpenAI API key](https://platform.openai.com/api-keys)
- Give the configuration a friendly name like `OpenAI`
- (Optional) Set the API base URL. It is only relevant if you're using another OpenAI-compatible proxy server like [Ollama](/advanced/ollama) or [LMStudio](/advanced/lmstudio).<br />

2. Create a new [chat model](http://localhost:42110/server/admin/database/chatmodel/add)
- Set the `chat-model` field to an [OpenAI chat model](https://platform.openai.com/docs/models). Example: `gpt-4o`.
- Make sure to set the `model-type` field to `OpenAI`.
- If your model supports vision, set the `vision enabled` field to `true`. This is currently only supported for OpenAI models with vision capabilities.

</TabItem>
<TabItem value="anthropic" label="Anthropic">
1. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) in the server admin settings.
- Add your [Anthropic API key](https://console.anthropic.com/account/keys)
- Give the configuration a friendly name like `Anthropic`. Do not configure the API base url.
2. Create a new [chat model](http://localhost:42110/server/admin/database/chatmodel/add)
- Set the `chat-model` field to an [Anthropic chat model](https://docs.anthropic.com/en/docs/about-claude/models#model-names). Example: `claude-3-5-sonnet-20240620`.
- Set the `model-type` field to `Anthropic`.
- Set the `ai model api` field to the Anthropic AI Model API you created in step 1.
</TabItem>
<TabItem value="gemini" label="Gemini">
1. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) in the server admin settings.
- Add your [Gemini API key](https://aistudio.google.com/app/apikey)
- Give the configuration a friendly name like `Gemini`. Do not configure the API base url.
2. Create a new [chat model](http://localhost:42110/server/admin/database/chatmodel/add)
- Set the `chat-model` field to a [Google Gemini chat model](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-models). Example: `gemini-1.5-flash`.
- Set the `model-type` field to `Gemini`.
- Set the `ai model api` field to the Gemini AI Model API you created in step 1.
</TabItem>
<TabItem value="offline" label="Offline">
Offline chat stays completely private and can work without internet using any open-source model.
:::
1. Get the name of your preferred chat model from [HuggingFace](https://huggingface.co/models?pipeline_tag=text-generation&library=gguf). *Most GGUF format chat models are supported*.
2. Open the [create chat model page](http://localhost:42110/server/admin/database/chatmodel/add/) on the admin panel
3. Set the `chat-model` field to the name of your preferred chat model
- Make sure the `model-type` is set to `Offline`
4. Set the newly added chat model as your preferred model in your [User chat settings](http://localhost:42110/settings) and [Server chat settings](http://localhost:42110/server/admin/database/serverchatsettings/).
We don't send any personal information or any information from/about your content.
## Disable Telemetry
If you're self-hosting Khoj, you can opt out of telemetry at any time by setting the `KHOJ_TELEMETRY_DISABLE` environment variable to `True`, via your shell or `docker-compose.yml`.
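For example, with a pip install the toggle can be set in your shell before starting Khoj (for Docker, add the same variable under your service's `environment` section in `docker-compose.yml`):
```bash
export KHOJ_TELEMETRY_DISABLE=True
khoj --anonymous-mode  # or however you normally start Khoj
```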
If you have any questions or concerns, please reach out to us on [Discord](https://discord.gg/BDgyabRM6e).
{"name":"Khoj","short_name":"Khoj","display":"standalone","start_url":"/","description":"The open, personal AI for your digital brain. You can ask Khoj to draft a message, paint your imagination, find information on the internet and even answer questions from your documents.","theme_color":"#ffffff","background_color":"#ffffff","icons":[{"src":"/static/assets/icons/khoj_lantern_128x128.png","sizes":"128x128","type":"image/png"},{"src":"/static/assets/icons/khoj_lantern_256x256.png","sizes":"256x256","type":"image/png"}],"screenshots":[{"src":"/static/assets/samples/phone-remember-plan-sample.png","sizes":"419x900","type":"image/png","form_factor":"narrow","label":"Remember and Plan"},{"src":"/static/assets/samples/phone-browse-draw-sample.png","sizes":"419x900","type":"image/png","form_factor":"narrow","label":"Browse and Draw"},{"src":"/static/assets/samples/desktop-remember-plan-sample.png","sizes":"1260x742","type":"image/png","form_factor":"wide","label":"Remember and Plan"},{"src":"/static/assets/samples/desktop-browse-draw-sample.png","sizes":"1260x742","type":"image/png","form_factor":"wide","label":"Browse and Draw"}]}