Compare commits

...

1111 Commits

Author SHA1 Message Date
Debanjum
713ba06a8d Release Khoj version 1.38.0 2025-03-29 18:30:06 +05:30
Debanjum
e9132d4fee Support attaching programming language file types to web app for chat 2025-03-29 01:22:35 +05:30
Debanjum
bdb6e33108 Install pgserver only when `pip install khoj[local]` is used
This avoids installing pgserver on linux arm64 docker builds, which
pgserver doesn't currently support. Support there isn't required anyway,
as Khoj docker images can use the standard postgres server made
available via our docker-compose.yml
2025-03-29 00:27:19 +05:30
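The optional dependency split described above can be sketched as a pyproject.toml fragment (the extra name `local` comes from the commit message; the exact dependency spec is illustrative):

```toml
[project.optional-dependencies]
# Installed only via `pip install khoj[local]`, so linux arm64 docker
# builds that rely on the standard postgres server skip pgserver entirely.
local = [
    "pgserver",
]
```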
Debanjum
5ee513707e Use embedded postgres db to simplify self-hosted setup (#1141)
Use pgserver python package as an embedded postgres db,
installed directly as a khoj python package dependency.

This significantly simplifies self-hosting with just a `pip install khoj`.
No need to also install postgres separately.

Still use standard postgres server for multi-user, production use-cases.
2025-03-29 00:03:55 +05:30
Debanjum
56b63f95ea Suggest Google image gen model, new Anthropic chat models on first run
- Update default anthropic chat models to latest good models.

- Now that Google supports a good text-to-image model, suggest adding
  it if the Google AI API is set up on first run.
2025-03-28 23:07:17 +05:30
Ikko Eltociear Ashimine
1e34de69e9 Fix spelling in Automations Docs (#1140)
Recieve -> Receive
2025-03-28 23:07:06 +05:30
Debanjum
72986c905a Fix default agent creation to allow chat on first run
Previously the agent slug was not used on create, even when passed
explicitly in the agent creation step.

This made the default agent slug differ until the next run, when it was
updated after creation, and prevented chat from working on first run.

The fix to use the agent slug when explicitly passed allows users to
chat on first run.
2025-03-28 22:49:00 +05:30
Debanjum
03de2803f0 Fallback to default agent for chat when unset in get conversation API 2025-03-28 00:56:18 +05:30
Debanjum
a387f638cd Enforce json schema on more chat actors to improve schema compliance
Including the infer webpage urls, gemini documents search, and pick
default mode tools chat actors
2025-03-28 00:56:18 +05:30
Debanjum
ccd9de7792 Improve safety settings for Gemini chat models
- Align remaining harm categories to only refuse in high harm scenarios
- Handle the new "negligible" harm probability in responses
2025-03-28 00:56:18 +05:30
Debanjum
2ec5cf3ae7 Normalize type of chat messages arg sent to Anthropic completion funcs
Previously messages had Anthropic-specific formatting applied before
being passed to Anthropic (chat) completion functions.

Move the code to format messages of type list[ChatMessage] into Anthropic
specific format down to the Anthropic (chat) completion functions.

This allows the rest of the functionality, like prompt tracing, to work
with the normalized list[ChatMessage] type of chat messages across AI API
providers
2025-03-26 18:24:17 +05:30
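The normalization described above can be sketched like this (a minimal illustration, assuming a simplified ChatMessage type; Anthropic's API takes the system prompt separately from the message list):

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    role: str      # "system", "user" or "assistant"
    content: str

def format_messages_for_anthropic(messages: list[ChatMessage]) -> tuple[str, list[dict]]:
    # Provider-specific formatting happens here, inside the completion function,
    # so callers (e.g. the prompt tracer) only ever see normalized ChatMessages.
    system = "\n".join(m.content for m in messages if m.role == "system")
    chat = [{"role": m.role, "content": m.content} for m in messages if m.role != "system"]
    return system, chat
```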
Debanjum
4085c9b991 Fix infer webpage url step actor to request up to specified max urls
Previously we'd always request up to 3 webpage urls via the prompt but
read only one of the requested webpage urls.

This would degrade the quality of research and default mode, as the
model may request reading up to 3 webpage links but have only one of
the requested webpages read.

This change passes the number of webpages to read down to the AI model
dynamically via the updated prompt. So the number of webpages requested
to be read should mostly match the number of webpages actually read.

Note: For now, the max webpages to read is kept the same as before, at 1.
2025-03-26 18:24:17 +05:30
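Passing the read limit into the prompt can be sketched as (hypothetical prompt text and function name, for illustration only):

```python
def infer_webpage_urls_prompt(query: str, max_webpages: int = 1) -> str:
    # Tell the model the actual read limit so it doesn't request more
    # webpages than will be read downstream.
    return (
        f"Which webpages should be read to answer: {query}?\n"
        f"Respond with at most {max_webpages} webpage URL(s) as a JSON list."
    )
```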
Debanjum
c337c53452 Fix to use agent chat model for research model planning
Previously the research mode planner ignored the current agent or
conversation specific chat model the user was chatting with. Only the
server chat settings, user default chat model, first created chat model
were considered to decide the planner chat model.

This change considers the agent chat model to be used for the planner
as well. The actual chat model picked is decided by the existing
prioritization of server > agent > user > first chat model.
2025-03-25 18:31:55 +05:30
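The server > agent > user > first chat model prioritization is essentially a first-non-None pick; a minimal sketch (illustrative function name):

```python
def pick_planner_chat_model(server_model, agent_model, user_model, first_model):
    # Existing prioritization: server > agent > user > first created chat model.
    # The fix adds agent_model into this chain for the research planner.
    for model in (server_model, agent_model, user_model, first_model):
        if model is not None:
            return model
    return None
```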
Debanjum
df090e5226 Enable unsharing of a public conversation (#1135)
This change enables the creator of a shared conversation to stop sharing the conversation publicly.

### Details
1. Create an API endpoint to enable the owner of the shared conversation to unshare it
2. Unshare a public conversation from the title pane of the public conversation on the web app
2025-03-25 14:24:01 +05:30
Debanjum
9dfa7757c5 Unshare public conversations from the title pane on web app
Only show the unshare button on public conversations created by the
currently logged in user. Otherwise hide the button

Set conversation.isOwner = true only if currently logged in user
shared the current conversation.

This isOwner information is passed by the get shared conversation API
endpoint
2025-03-25 14:05:29 +05:30
Debanjum
d9c758bcd2 Create API endpoint to unshare a public conversation
Pass isOwner field from the get shared conversation API endpoint if
the currently authenticated user created the requested public
conversation
2025-03-25 14:05:29 +05:30
Debanjum
e3f6d241dd Normalize chat messages sent to gemini funcs to work with prompt tracer
Previously messages passed to gemini (chat) completion functions
had some Gemini-specific formatting mixed in.

These functions expect messages of type list[ChatMessage] to work
with the prompt tracer etc.

Move the code to format messages of type list[ChatMessage] into gemini
specific format down to the gemini (chat) completion functions.

This allows the rest of the functionality, like prompt tracing, to work
with the normalized list[ChatMessage] type of chat messages across
providers
2025-03-25 14:04:16 +05:30
Debanjum
7976aa30f8 Terminate research if query or tool is empty 2025-03-25 14:04:16 +05:30
Debanjum
39aa48738f Set effort for openai reasoning models to pick tool in research mode
This is analogous to how we enable extended thinking for claude models
in research mode.

Default to medium effort irrespective of deepthought for openai
reasoning models as high effort is currently flaky with regular
timeouts and low effort isn't great.
2025-03-25 14:04:16 +05:30
Debanjum
b4929905b2 Add costs of ai prompt cache read, write. Use for calls to Anthropic 2025-03-25 14:04:16 +05:30
Debanjum
d4b0ef5e93 Fix ability to disable code and internet providers in eval workflow
Sets env vars to empty if the condition is not met, so that:
- Terrarium (not e2b) used as code sandbox on release triggered eval
- Internet turned off for math500 eval
2025-03-25 14:04:16 +05:30
sabaimran
a8285deed7 Release Khoj version 1.37.2 2025-03-23 11:38:25 -07:00
sabaimran
b7ac8771de Update a few pieces of documentation around data sources. 2025-03-23 11:36:20 -07:00
sabaimran
12e7409da9 Release Khoj version 1.37.1 2025-03-23 11:10:34 -07:00
sabaimran
985f1672ed Remove eval lists from git tracking 2025-03-23 10:59:32 -07:00
Debanjum
d1df9586ca Standardize AI model response temperature across provider specific ranges
- Anthropic expects a 0-1 range. Gemini & OpenAI expect a 0-2 range
- Anneal temperature to explore reasoning trajectories but respond factually
- Default send_message_to_model and extract_question temps to the same
2025-03-23 18:09:22 +05:30
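One way to standardize temperature across provider-specific ranges is to keep a canonical 0-1 value internally and scale it per provider; a sketch under that assumption (illustrative names, not Khoj's actual code):

```python
# Anthropic expects 0-1; OpenAI and Gemini expect 0-2 (per the commit message).
PROVIDER_TEMPERATURE_RANGE = {"anthropic": 1.0, "openai": 2.0, "gemini": 2.0}

def scale_temperature(normalized_temp: float, provider: str) -> float:
    # Map a canonical 0-1 temperature onto each provider's accepted range.
    return normalized_temp * PROVIDER_TEMPERATURE_RANGE[provider]
```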
Debanjum
55ae0eda7a Upgrade package dependencies nextjs for web app and torch on server 2025-03-23 17:10:40 +05:30
Debanjum
8409e64ff0 Clean AI model API providers documentation 2025-03-23 16:26:34 +05:30
Debanjum
86a51d84ca Access Claude and Gemini via GCP Vertex AI (#1134)
Support accessing Claude and Gemini AI models via Vertex AI on Google Cloud. 

See the documentation at docs.khoj.dev for setup details
2025-03-23 16:26:02 +05:30
Debanjum
16ffebf765 Document how to configure using AI models via GCP Vertex AI 2025-03-23 16:12:46 +05:30
Debanjum
7153d27528 Cache Google AI API client for reuse 2025-03-23 16:12:46 +05:30
Debanjum
da33c7d83c Support access to Gemini models via GCP Vertex AI 2025-03-23 16:12:46 +05:30
Debanjum
603c4bf2df Support access to Anthropic models via GCP Vertex AI
Enable configuring a Khoj AI model API for Vertex AI using GCP credentials.

Specifically use the api key & api base url fields of the AI Model API
associated with the current chat model to extract the gcp region, gcp
project id & credentials. This helps create an AnthropicVertex client.

The api key field should contain the GCP service account keyfile as a
base64 encoded string.

The api base url field should be of the form
`https://{MODEL_GCP_REGION}-aiplatform.googleapis.com/v1/projects/{YOUR_GCP_PROJECT_ID}`

Accepting GCP credentials via the AI model API makes it easy to use
across local and cloud environments, as it bypasses the need for a
separate service account key file on the Khoj server.
2025-03-23 16:12:46 +05:30
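Extracting the region, project id, and credentials from the api key and api base url fields can be sketched as follows (an illustrative helper under the commit's stated conventions, not Khoj's actual code; it assumes the base url matches the documented form):

```python
import base64
import json
import re

def parse_vertex_ai_config(api_key: str, api_base_url: str) -> dict:
    # The api key field holds a base64-encoded GCP service account keyfile;
    # region and project id are embedded in the api base url, e.g.
    # https://{MODEL_GCP_REGION}-aiplatform.googleapis.com/v1/projects/{YOUR_GCP_PROJECT_ID}
    match = re.match(
        r"https://(?P<region>[\w-]+)-aiplatform\.googleapis\.com/v1/projects/(?P<project>[\w-]+)",
        api_base_url,
    )
    credentials = json.loads(base64.b64decode(api_key))
    return {
        "region": match.group("region"),
        "project_id": match.group("project"),
        "credentials": credentials,
    }
```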
Debanjum
8bebcd5f81 Support longer API key field in DB to store GCP service account keyfile 2025-03-23 14:55:50 +05:30
Debanjum
f2b438145f Upgrade sentence-transformers. Avoid transformers v4.50.0 as problematic
- The 3.4.1 release of sentence-transformers fixes the offline load latency
  of sentence transformer models (and Khoj) by avoiding a call to HF

- The 4.50.0 release of transformers results in a
  jax error (unexpected keyword argument 'flatten_with_keys') on load.
2025-03-23 09:02:57 +05:30
Debanjum
510cbed61c Make google auth package dependency explicit to simplify code
Previously the google auth library was explicitly installed only for
the cloud variant of Khoj, to minimize packages installed for non
production use-cases.

But it was implicitly installed anyway, as a dependency of an
explicitly installed package in the default installation.

Making the dependency on google auth package explicit simplifies
the conditional import of google auth in code while not incurring any
additional cost in terms of space or complexity.
2025-03-23 09:02:57 +05:30
Debanjum
5fff05add3 Set seed for Google Gemini models using KHOJ_LLM_SEED env variable
This env var was already being used to set seed for OpenAI and Offline
models
2025-03-22 08:59:31 +05:30
Debanjum
6cc5a10b09 Disable SimpleQA eval on release as saturated & low signal for usecase
Reaching >94% in research mode on SimpleQA. When answers can be
researched online, it becomes too easy. And the FRAMES eval does a
more thorough job of evaluating that use-case anyway.
2025-03-22 08:05:12 +05:30
Debanjum
45015dae27 Limit to json enforcement via json object with DeepInfra hosted models
DeepInfra-based models do not seem to support json schema. See
https://deepinfra.com/docs/advanced/json_mode for reference
2025-03-22 08:04:09 +05:30
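The provider-dependent fallback from json schema to plain json object mode can be sketched as (hypothetical capability table and function name, for illustration):

```python
def json_response_format(provider: str, schema: dict) -> dict:
    # Hypothetical capability table: some APIs support full json schema
    # enforcement, others (e.g. DeepInfra-hosted models) only json object mode.
    supports_schema = {"openai": True, "gemini": True, "deepinfra": False}
    if supports_schema.get(provider, False):
        return {"type": "json_schema", "json_schema": {"schema": schema}}
    return {"type": "json_object"}
```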
Debanjum
dc473015fe Set default model, sandbox to display in eval workflow summary on release 2025-03-20 14:44:56 +05:30
Debanjum
80d864ada7 Release Khoj version 1.37.0 2025-03-20 14:06:57 +05:30
Debanjum
0c53106b30 Fix passing inline images to vision models
- Fix regression: Inline images were not getting passed to the AI
  models since #992
- Format inline images passed to Gemini models correctly
- Format inline images passed to Anthropic models correctly

Verified vision working with inline and url images for OpenAI,
Anthropic and Gemini models.

Resolves #1112
2025-03-20 13:22:46 +05:30
Debanjum
1ce1d2f5ab Deduplicate, clean code for S3 images uploads 2025-03-20 12:30:07 +05:30
Debanjum
f15a95dccf Show Khoj agent in agent dropdown by default on mobile in web app home
Previously on a slow connection you'd see the agent dropdown flicker
from undefined to the Khoj default agent on phones and other thin screens.

This is unnecessary and jarring. Populate with the default agent to
remove this issue
2025-03-20 12:27:52 +05:30
Debanjum
9a0b126f12 Allow chat input on web app while Khoj responds to speed interactions
Previously the chat input area didn't allow inputting text while Khoj is
researching and generating response.

This change allows the user to type their next message while Khoj
responds. This should speed up interaction cycles as the user can have
their next query ready to send when Khoj finishes its response.
2025-03-19 23:08:22 +05:30
Debanjum
e68428dd24 Support enforcing json schema in supported AI model APIs (#1133)
- Trigger
  Gemini 2.0 Flash doesn't always follow JSON schema in research prompt

- Details
  - Use json schema to enforce generate online queries format
  - Use json schema to enforce research mode tool pick format
  - Support constraining Gemini model output to specified response schema
  - Support constraining OpenAI model output to specified response schema
  - Only enforce json output in supported AI model APIs
  - Simplify OpenAI reasoning model specific arguments to OpenAI API
2025-03-19 22:59:23 +05:30
Debanjum
a5627ef787 Use json schema to enforce generate online queries format 2025-03-19 22:32:53 +05:30
Debanjum
2c53eb9de1 Use json schema to enforce research mode tool pick format 2025-03-19 22:32:53 +05:30
Debanjum
6980014838 Support constraining Gemini model output to specified response schema
If the response_schema argument is passed to
send_message_to_model_wrapper it is used to constrain output by Gemini
models
2025-03-19 22:32:53 +05:30
Debanjum
ac4b36b9fd Support constraining OpenAI model output to specified response schema 2025-03-19 22:32:52 +05:30
Debanjum
4a4d225455 Only enforce json output in supported AI model APIs
Deepseek reasoner does not support json object or schema via the deepseek API.
Azure AI API does not support json schema.

Resolves #1126
2025-03-19 22:32:11 +05:30
Debanjum
d74c3a1db4 Simplify OpenAI reasoning model specific arguments to OpenAI API
Previously OpenAI reasoning models didn't support stream_options and
response_format

Add reasoning_effort arg for calls to OpenAI reasoning models via API.
Right now it defaults to medium but can be changed to low or high
2025-03-19 21:12:02 +05:30
Debanjum
9b6d626a09 Fix to store e2b code execution text output file content as string
Previously the E2B code execution text output content was encoded as b64.

This was breaking
- The AI model's ability to see the content of the file
- Downloading the output text file with appropriately encoded content

Issue created when adding E2B code sandbox in #1120
2025-03-19 20:09:41 +05:30
Artem Yurchenko
a7e261a191 Implement better bug issue template (#1129)
* Implement better bug issue template
* Fix IDs in new bug issue template
* Reduce, reorder and improve field descriptions in the bug issue template

---------

Co-authored-by: Debanjum <debanjum@gmail.com>
2025-03-18 20:53:57 +05:30
Debanjum
931f555cf8 Configure max allowed iterations in research mode via env var 2025-03-18 18:15:50 +05:30
Debanjum
2ab8e711d3 Fix Gemini models to output valid json when configured 2025-03-18 17:02:45 +05:30
sabaimran
ce60cb9779 Remove max-w 80vw, which was smushing AI responses 2025-03-13 13:44:05 -07:00
sabaimran
a3c4347c11 Add a one-click action to export all conversations. Add a self-service delete account action to the settings page 2025-03-12 23:54:02 -07:00
Debanjum
79816d2b9b Upgrade package dependencies of server, clients and docs 2025-03-12 00:22:08 +05:30
Debanjum
7bb6facdea Add support for Google Imagen AI models for image generation
Use the new Google GenAI client to generate images with Imagen
2025-03-11 23:39:46 +05:30
Debanjum
bd06fcd9be Stop using old google generativeai package to raise, catch exceptions 2025-03-11 23:39:46 +05:30
Debanjum
bdfa6400ef Upgrade to new Gemini package to interface with Google AI 2025-03-11 22:18:07 +05:30
Debanjum
2790ba3121 Update default temperature for calls to Gemini models to 0.6 from 0.2
This aligns with default temperature used by google ai studio and
may reduce loops and repetitions
2025-03-11 21:28:04 +05:30
Debanjum
50f71be03d Support Claude 3.7 and use its extended thinking in research mode
Claude 3.7 Sonnet is Anthropic's first reasoning model. It provides a
single model/api capable of standard and extended thinking. Utilize
the extended thinking in Khoj's research mode.

Increase default max output tokens to 8K for Anthropic models.
2025-03-11 21:27:59 +05:30
Debanjum
69048a859f Fix E2B tool description prompt to mention plotly package available 2025-03-11 02:20:06 +05:30
Debanjum
9751adb1a2 Improve Code Tool, Sandbox and Eval (#1120)
# Improve Code Tool, Sandbox
  - Improve code gen chat actor to output code in inline md code blocks
  - Stop code sandbox on request timeout to allow sandbox process restarts
  - Use tenacity retry decorator to retry executing code in sandbox
  - Add retry logic to code execution and add health check to sandbox container
  - Add E2B as an optional code sandbox provider

# Improve Gemini Chat Models
  - Default to non-zero temperature for all queries to Gemini models
  - Default to Gemini 2.0 flash instead of 1.5 flash on setup
  - Set default chat model to KHOJ_CHAT_MODEL env var if set
2025-03-09 18:49:59 +05:30
Debanjum
c133d11556 Improvements based on code feedback 2025-03-09 18:23:30 +05:30
Debanjum
94ca458639 Set default chat model to KHOJ_CHAT_MODEL env var if set
Simplify code logic to set default_use_model during init for readability
2025-03-09 18:23:30 +05:30
Debanjum
7b2d0fdddc Improve code gen chat actor to output code in inline md code blocks
Simplify code gen chat actor to improve correct code gen success,
especially for smaller models & models with limited json mode support

Allow specifying code blocks inline with reasoning to try to improve
code quality

Infer input files based on user file paths referenced in code.
2025-03-09 18:23:30 +05:30
Debanjum
8305fddb14 Default to non-zero temperature for all queries to Gemini models.
It may mitigate the intermittent invalid json output issues. The model
may be going into repetition loops; a non-zero temp may avoid that.
2025-03-09 18:23:30 +05:30
Debanjum
45fb85f1df Add E2B as an optional code sandbox provider
- Specify E2B api key and template to use via env variables
- Try to load and use the e2b library when the E2B API key is set
- Fall back to the Terrarium sandbox otherwise
- Enable more python packages in e2b sandbox like rdkit via custom e2b template

- Use Async E2B Sandbox
- Parallelize file IO with sandbox
- Add documentation on how to enable E2B as code sandbox instead of Terrarium
2025-03-09 18:23:30 +05:30
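The env-var driven sandbox selection can be sketched as (the `E2B_API_KEY` variable name is an assumption based on the commit message; the function name is illustrative):

```python
import os

def choose_code_sandbox() -> str:
    # Use E2B when its API key env var is set; otherwise fall back to Terrarium.
    return "e2b" if os.getenv("E2B_API_KEY") else "terrarium"
```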
Debanjum
b4183c7333 Default to gemini 2.0 flash instead of 1.5 flash on Gemini setup
Add price of gemini 2.0 flash for cost calculations
2025-03-07 13:48:15 +05:30
Debanjum
701a7be291 Stop code sandbox on request timeout to allow sandbox process restarts 2025-03-07 13:48:15 +05:30
Debanjum
ecc2f79571 Use tenacity retry decorator to retry executing code in sandbox 2025-03-07 13:48:15 +05:30
sabaimran
4a28714a08 Add retry logic to code execution and add health check to sandbox container 2025-03-07 13:48:15 +05:30
Debanjum
f13bdc5135 Log eval run progress percentage for orientation 2025-03-07 13:48:15 +05:30
Debanjum
bbe1b63361 Improve Obsidian Sync for Large Vaults (#1078)
- Batch sync files by size to try not exceed API request payload size limits
- Fix force sync of large vaults from Obsidian
- Add API endpoint to delete all indexed files by file type
- Fix to also delete file objects when calling the DELETE content source API
2025-03-07 13:47:21 +05:30
Debanjum
043de068ff Fix force sync of large vaults from Obsidian
Previously if you tried to force sync a vault with more than 1000
files it would only end up keeping the last batch because the PUT API
call would delete all previous entries.

This change calls DELETE for all previously indexed data first, followed by
a PATCH to index current vault on a force sync (regenerate) request.
This ensures that files from previous batches are not deleted.
2025-03-07 13:34:48 +05:30
Debanjum
86fa528a73 Add API endpoint to delete all indexed files by file type 2025-03-07 13:28:53 +05:30
Debanjum
b692e690b4 Rename and fix the delete content source API endpoint
- Delete file objects on deleting content by source via API
  Previously only entries were deleted, not the associated file objects
- Add new db adapter to delete multiple file objects (by name)
2025-03-07 13:28:53 +05:30
Debanjum
29403551b2 Batch sync files by size to not exceed API request payload size limits
This may help mitigate the issue #970
2025-03-04 09:31:48 +05:30
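Batching files by cumulative size so each request stays under the payload limit can be sketched as (an illustrative generator, not the Obsidian plugin's actual code):

```python
def batch_files_by_size(files: dict[str, str], max_batch_bytes: int):
    # Group files into batches whose combined size stays under the payload limit.
    # A single oversized file still goes out alone in its own batch.
    batch, batch_size = {}, 0
    for path, content in files.items():
        size = len(content.encode())
        if batch and batch_size + size > max_batch_bytes:
            yield batch
            batch, batch_size = {}, 0
        batch[path] = content
        batch_size += size
    if batch:
        yield batch
```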
Debanjum
0ddb6a38b8 Update Khoj docs to mark LMStudio as not supported anymore
They seem to have deprecated json mode in their openai compatible API
which Khoj uses extensively.
2025-02-18 21:33:03 +05:30
sabaimran
de550e5ca7 re-enable markdown formatting when chatting with o3 2025-02-18 21:12:03 +05:30
sabaimran
0016fe06c9 Release Khoj version 1.36.6 2025-02-18 18:54:13 +05:30
sabaimran
7089ea1cf4 Remove experimental parenthesis from research mode ✁ 2025-02-18 08:59:12 +05:30
Debanjum
5a3c7b145a Decouple Django CSRF, ALLOWED_HOST settings for more complex setups
- Set KHOJ_ALLOWED_DOMAIN to the domain that Khoj is accessible on
  from the host machine. This can be the internal i.p or domain of the
  host machine.

  It can be used by your load balancer/reverse proxy to access Khoj.
  For example, if the load balancer service is in the khoj docker
  network, KHOJ_ALLOWED_DOMAIN will be `server` (i.e. the service name).

- Set KHOJ_DOMAIN to your externally accessible domain or IP to avoid
  CSRF trusted origin or unset cookie issues when trying to access the
  khoj admin panel.

Resolves #1114
2025-02-17 15:47:35 +05:30
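The decoupling described above can be sketched as a Django settings fragment (variable names come from the commit; the defaults and exact list contents are illustrative assumptions):

```python
import os

# KHOJ_ALLOWED_DOMAIN: the internal domain/IP the reverse proxy uses to reach Khoj.
# KHOJ_DOMAIN: the externally accessible domain, used for CSRF trusted origins.
allowed_domain = os.getenv("KHOJ_ALLOWED_DOMAIN", "localhost")
external_domain = os.getenv("KHOJ_DOMAIN", "localhost")

ALLOWED_HOSTS = [allowed_domain, "localhost", "127.0.0.1"]
CSRF_TRUSTED_ORIGINS = [f"https://{external_domain}", f"http://{external_domain}"]
```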
Debanjum
bb0828b887 Only show notes tool option to llm for selection when user has documents 2025-02-17 15:23:26 +05:30
Debanjum
5dfb59e1ee Show more references in teaser ref section of chat response on web app 2025-02-15 14:11:17 +05:30
sabaimran
b6e745336b Add a short description to explain what the create agent button does 2025-02-13 14:08:50 -08:00
sabaimran
848a91313d Move the create agent button to the bottom of the sidebar and fix experience when resetting settings 2025-02-12 19:24:02 -08:00
sabaimran
d0d30ace06 Add feature to create a custom agent directly from the side panel with currently configured settings
- Also, when in an unsubscribed state, fall back to the default model when chatting with an agent
- With conversion, create a brand new agent from inside the chat view that can be managed separately
2025-02-12 18:24:41 -08:00
sabaimran
5d6eca4c22 Fix automation retrieval validity check 2025-02-11 19:17:37 -08:00
sabaimran
51952364ab Release Khoj version 1.36.5 2025-02-11 13:23:30 -08:00
sabaimran
0211151570 Disable auto-setup of offline models if in non-interactive offline mode 2025-02-10 18:48:00 -08:00
sabaimran
589b047d90 Simplify agent picker selection in homepage mobile view 2025-02-10 15:05:30 -08:00
sabaimran
427ec061b4 Add auto redirect on delete of current conversation 2025-02-09 11:38:44 -08:00
sabaimran
bbb5fd667a update messaging in the welcome email 2025-02-07 15:10:59 -08:00
sabaimran
ff6cb80c84 Release Khoj version 1.36.4 2025-02-06 16:50:50 -08:00
sabaimran
031bccb628 Fix awkward padding in chat window 2025-02-06 16:31:25 -08:00
sabaimran
43e032e25a Improve handling of dark mode theme in order to avoid jitter when loading new page 2025-02-06 16:17:58 -08:00
sabaimran
2e01a95cf1 improve system prompt for generating mermaid.js diagrams 2025-02-06 15:05:30 -08:00
sabaimran
a2af6bea8e Release Khoj version 1.36.3 2025-02-04 13:23:41 -08:00
sabaimran
0d10c5fb02 Improve default selection of models to avoid infinite loops 2025-02-04 11:36:41 -08:00
sabaimran
24b1dd3bff Release Khoj version 1.36.2 2025-02-03 20:22:49 -08:00
sabaimran
4409a58794 set initial model of default state 2025-02-03 18:17:21 -08:00
sabaimran
51874c25d5 Prevent infinite loops in model selection logic by configuring an initial model state 2025-02-03 18:11:01 -08:00
sabaimran
489fa71143 Update the width for rendering all conversation sessions 2025-02-03 15:49:43 -08:00
sabaimran
b354a37dcd Release Khoj version 1.36.1 2025-02-02 21:55:59 -08:00
sabaimran
61e48d686e Let file context buttons route to search page instead of settings for upload/manage 2025-02-02 12:26:13 -08:00
sabaimran
b4c467cd11 Remove shadows from reference panel trigger icons 2025-02-02 12:23:19 -08:00
sabaimran
a3d75e5241 When in mobile view, don't use the hover card in the model selector 2025-02-02 12:22:45 -08:00
sabaimran
4f79abb429 Release Khoj version 1.36.0 2025-02-02 08:39:21 -08:00
sabaimran
60e6913494 Merge pull request #1094 from khoj-ai/features/add-chat-controls
Make it easier to determine which model you're chatting with, and to effortlessly update said model from within a given chat. 

In this change, we introduce a side bar that allows users to quickly change their chat model, tools, custom instructions, and file filters, directly within the chat view. This removes the need for setting up custom agents for simple instructions and mitigates the requirement to go to the settings page to verify the chat model in action.

The settings page will still configure a per-user *default*, but the sidebar will allow for greater customization based on the needs of a conversation.

We also extend the chat model to include more attributes that help users make decisions about model selection, including `strengths` and `description`. This can help people quickly understand which model might work best for their use case.
2025-02-01 14:35:47 -08:00
sabaimran
c558bbfd44 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-chat-controls 2025-02-01 14:25:58 -08:00
sabaimran
08bc1d3bcb Improve sizing / spacing of side bar 2025-02-01 14:25:45 -08:00
sabaimran
f3b2580649 Add back hover state with collapsed references 2025-02-01 14:03:17 -08:00
sabaimran
0645af9b16 Use the agent CM as a backup chat model when available / necessary. Remove automation as agent option. 2025-02-01 13:06:42 -08:00
Debanjum
f2eba667fc Fallback to schedule automation in UTC timezone if unset
- Handle null automation ids in calls to get_automation function
2025-02-01 19:12:51 +05:30
Debanjum
0bfa7c1c45 Add support for o3 mini model by openai 2025-02-01 02:51:13 +05:30
sabaimran
641f1bcd91 Only open the side bar automatically when there is no chat history && no pending messages. 2025-01-30 16:07:27 -08:00
sabaimran
b111a9d6c6 Release Khoj version 1.35.3 2025-01-30 11:48:33 -08:00
Yuto SASAKI
018bc718fc Fix typo in docs for Chat Model Type for Google Gemini setup (#1098) 2025-01-30 03:40:37 -08:00
sabaimran
b73f446713 Fall back to showing the raw outputted diagram if rendering fails 2025-01-29 21:50:28 -08:00
sabaimran
98e3f5162a Show generated diagram raw code if rendering fails 2025-01-29 21:49:33 -08:00
sabaimran
b5f99dd103 Rename custom instructions -> instructions 2025-01-29 15:06:22 -08:00
sabaimran
0ff33d4347 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-chat-controls 2025-01-29 14:58:15 -08:00
sabaimran
c3cb6086e0 Add list typing to the updated_messages temporary variable 2025-01-29 14:19:57 -08:00
sabaimran
5ea056f03e Add custom handling logic when speaking with deepseek reasoner 2025-01-29 14:11:27 -08:00
sabaimran
d640299edc Use is_active property to determine user subscription status 2025-01-29 14:10:59 -08:00
sabaimran
e2bfd4ac0f Change name of teams section to teams 2025-01-29 12:52:10 -08:00
sabaimran
58f77edcad Make conversation sessions rounded-lg 2025-01-29 12:47:36 -08:00
sabaimran
0b2305d8f2 Add an animation to opening and closing the thought process 2025-01-29 12:47:14 -08:00
sabaimran
67b2e9c194 Increase subscribed total entries size to 500 MB 2025-01-29 09:06:35 -08:00
sabaimran
01faae0299 Simplify the train of thought UI 2025-01-28 22:00:09 -08:00
sabaimran
59f0873232 Rename train of thought button 2025-01-28 21:32:21 -08:00
sabaimran
272764d734 Simplify the chat response / user message bubbles 2025-01-28 21:17:56 -08:00
sabaimran
b61226779e Simplify references section with icons in chat message 2025-01-28 18:13:26 -08:00
sabaimran
58879693f3 Simplify nav menu and add a teams section 2025-01-28 18:12:50 -08:00
sabaimran
e076ebd133 Make tools section of sidebar a popover to prevent increasing height 2025-01-28 18:12:13 -08:00
sabaimran
ee3ae18f55 add code, remove summarize from agent tools 2025-01-28 18:11:25 -08:00
sabaimran
ba7d53c737 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-chat-controls 2025-01-28 13:01:06 -08:00
Debanjum
af49375884 Only show confusing fallback tokenizer used logs in high verbosity mode 2025-01-25 18:35:14 +07:00
Debanjum
aebdd174ec Make title of PeopleAlsoAsk section of online results optional 2025-01-25 18:35:14 +07:00
sabaimran
43c9ec260d Allow agent personality to be nullable, in which case the default prompt will be used. 2025-01-24 08:24:53 -08:00
sabaimran
2b9a61c987 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-chat-controls 2025-01-23 11:43:51 -08:00
sabaimran
a3b5ec4737 Release Khoj version 1.35.2 2025-01-22 21:42:14 -08:00
sabaimran
938ef0a27b Update DB migration for better memory and speed efficiency 2025-01-22 20:48:37 -08:00
sabaimran
9fc825d7a6 Release Khoj version 1.35.1 2025-01-22 19:51:28 -08:00
sabaimran
5a3a897080 Temporarily move logic to associate entry and fileobject objects into the management command, out of automatic migrations 2025-01-22 19:50:22 -08:00
sabaimran
fd90842d38 Bump postgresql server dev version to 16 for latest ubuntu 2025-01-22 19:07:54 -08:00
sabaimran
1a0923538e Release Khoj version 1.35.0 2025-01-22 19:03:25 -08:00
sabaimran
dc6e9e8667 Skip showing hidden agents in the all conversations agent filter 2025-01-21 12:43:34 -08:00
sabaimran
c1b0a9f8d4 Fix sync to async issue when getting default chat model in hidden agent configuration API 2025-01-21 12:06:09 -08:00
sabaimran
e518626027 Add typing to empty list of operations 2025-01-21 11:56:52 -08:00
sabaimran
e3e93e091d automatically open the side bar when a new chat is created with the default agent. 2025-01-21 11:56:06 -08:00
sabaimran
c43079cb21 Add merge migration and add a new button for new convo in sidebar 2025-01-21 11:01:08 -08:00
sabaimran
5a36360408 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-chat-controls 2025-01-21 10:32:47 -08:00
sabaimran
f96e5cd047 Allow file filter dropdown to pop up automatically when typing "file:" 2025-01-21 08:30:40 -08:00
sabaimran
6d1b16901d Fix the naming of framework > file 2025-01-21 07:24:43 -08:00
Debanjum
7fad1f43f6 Fix text quoting and format web app search page with prettier 2025-01-21 18:18:44 +07:00
sabaimran
8022f040d2 Fix spacing of pagination buttons 2025-01-20 17:23:14 -08:00
sabaimran
3b381a5fe8 Improve default state when no documents are found yet 2025-01-20 17:19:15 -08:00
sabaimran
e0dcd11f34 Allow browsing and discovery of knowledge base in the search page #1073
Currently, it's rather opaque and difficult to substantially browse through the uploaded knowledge base. Effectively, you can only do this through the small file modal in the settings page. 

Update to include all indexed files in the search page for viewing & deletion. Function to delete all files is still in the settings page.

Add a migration that associates file objects with `entry`s using a foreign key. Add a migration command that deletes dangling fileobjects.
2025-01-20 16:14:25 -08:00
sabaimran
4b35bee365 Merge branch 'master' of github.com:khoj-ai/khoj into HEAD 2025-01-20 15:35:19 -08:00
sabaimran
8ad60f53d3 Remove export from filefiltercombo box 2025-01-20 15:26:32 -08:00
sabaimran
9f18d6494f Add a file filter combobox for easier file filter selection 2025-01-20 14:39:29 -08:00
sabaimran
d36f235da5 Rename API to get all files, minor UI updates 2025-01-20 13:30:10 -08:00
sabaimran
849b7c7af6 Refresh current page after file deleted 2025-01-20 13:12:37 -08:00
sabaimran
edfed2d571 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-a-knowledge-base-page 2025-01-20 13:11:45 -08:00
sabaimran
b661d2cba7 Merge pull request #1030 from Yash-1511/feat/autocomplete-file-query
## Description
Added file path autocompletion to enhance the search experience in Khoj. Users can now easily search for specific files by typing "file:" followed by the file name, with real-time suggestions appearing as they type.

## Changes
- Added new API endpoint `/api/file-suggestions` to get file path suggestions
- Enhanced search UI with dropdown suggestions for file paths
- Implemented debounced search to optimize API calls
- Added keyboard (Enter) and mouse click support for selecting suggestions

## Features
- Type "file:" to trigger file path suggestions
- Real-time filtering of suggestions as you type
- Top 10 alphabetically sorted suggestions
- Case-insensitive matching
- Keyboard and mouse interaction support
- Clear visual feedback with a dropdown UI
2025-01-20 12:35:43 -08:00
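The PR above describes case-insensitive matching with the top 10 alphabetically sorted suggestions. A minimal sketch of that server-side matching behavior (function and variable names are illustrative, not Khoj's actual code):

```python
def suggest_files(query: str, indexed_files: list, limit: int = 10) -> list:
    """Return up to `limit` file paths containing `query`, matched case-insensitively."""
    q = query.lower()
    matches = [f for f in indexed_files if q in f.lower()]
    # Alphabetical sort, then cap to the top `limit` suggestions
    return sorted(matches)[:limit]
```

The actual `/api/file-suggestions` endpoint would wrap logic like this and also scope results to the authenticated user's files.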
sabaimran
0c29c7a5bf Update layout and rendering of share page for hidden agent 2025-01-20 12:02:07 -08:00
sabaimran
000580cb8a Improve loading state when files not found and fix default state interpretation for model selector 2025-01-20 11:47:42 -08:00
sabaimran
235114b432 Fix agent data import across chat page + 2025-01-20 11:04:48 -08:00
sabaimran
d681a2080a Centralize use of useUserConfig and use that to retrieve default model and chat model options 2025-01-20 10:59:02 -08:00
sabaimran
a3fcd6f06e Fix import of AgentData from agentcard 2025-01-20 10:36:11 -08:00
sabaimran
d7800812ad Fix default states for the model selector 2025-01-20 10:18:09 -08:00
sabaimran
98baa93a31 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-chat-controls 2025-01-20 09:36:42 -08:00
sabaimran
696551b686 Add typing to operations in merge migration file 2025-01-20 08:36:40 -08:00
sabaimran
d1e7b5b234 Add a merge migration to resolve differences 2025-01-20 08:34:57 -08:00
sabaimran
1f59afe962 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-a-knowledge-base-page 2025-01-20 08:32:17 -08:00
sabaimran
016bcd7674 Revert page size to 10, rather than 1 2025-01-20 08:32:02 -08:00
sabaimran
8fe08eecce add --break-system-packages to bypass venv requirement 2025-01-20 00:21:27 -08:00
sabaimran
bf58d9430b downgrade postgres server pkg to 16 2025-01-20 00:15:56 -08:00
sabaimran
95ad1f936e upgrade postgres server to 17 2025-01-20 00:10:20 -08:00
sabaimran
a214bd4100 upgrade pg server dev version to 15 2025-01-20 00:05:35 -08:00
sabaimran
82ff74cfa9 Run on container with ubuntu latest for pytest gh action workflow 2025-01-19 23:57:57 -08:00
sabaimran
83d856f97d Add some basic pagination logic to the knowledge page to prevent overloading the API or the client 2025-01-19 22:48:36 -08:00
sabaimran
59ee6e961a Only set hasmodified to true during model select if different from original model 2025-01-19 18:19:48 -08:00
sabaimran
e982398c2c Weird spacing issue resolve (it was because of the footer in the collapsed state still having some width) 2025-01-19 18:16:43 -08:00
sabaimran
dbce039033 Revert the fixed hack to hide horizontal spacing issue with sidebar because it breaks the animation on closed. Sigh. 2025-01-19 18:11:53 -08:00
sabaimran
0d38cc9753 Handle further edge cases when setting chat agent data and fix alignment of chat input / side panel 2025-01-19 17:59:37 -08:00
sabaimran
b248123135 Hook up hidden agent creation and update APIs to the UI
- This allows users to initiate hidden agent creation from the side bar directly. Any updates can easily be applied to the conversation agent.
2025-01-19 17:36:30 -08:00
sabaimran
be11f666e4 Initialize the concept in the backend of hidden agents
A hidden agent basically allows each individual conversation to maintain custom settings, via an agent that's not exposed to the traditional functionalities allotted for manually created agents (e.g., browsing, maintenance in agents page).

This will be hooked up to the front-end such that any conversation that's initiated with the default agent can then be given custom settings, which in the background creates a hidden agent. This allows us to repurpose all of our existing agents infrastructure for chat-level customization.
2025-01-19 12:16:37 -08:00
sabaimran
0a0f30c53b Update relevant agent tool descriptions
Remove text (as by default, must output text), and improve the Notes description for clarity
2025-01-19 12:14:28 -08:00
sabaimran
f10b072634 update looks & feel of chat side bar with model selector, checkboxes for tools, and actions (not yet implemented) 2025-01-19 12:13:37 -08:00
sabaimran
7837628bb3 Update existing agentData imports 2025-01-19 12:12:23 -08:00
sabaimran
7998a258b6 Add additional ui components for tooltip, checkbox 2025-01-19 12:08:02 -08:00
sabaimran
00370c70ed Consolidate the AgentData Type into the agentCard 2025-01-19 12:06:54 -08:00
Debanjum
fde71ded16 Upgrade web app dependencies 2025-01-19 13:44:59 +07:00
Debanjum
2d4633d298 Use encoded email, otp in login URL in email & web app sign-in flow
Previously emails with url special characters would not get
successfully identified for login. Account creation was fine due to
email being in POST request body. But login with such emails did not
work due to query params not being escaped before being sent to server

This change escapes both the code and email in the login URL sent to
the server. So login with emails containing special characters like
`email+khoj@gmail.com' works. It fixes both the URL sent by the web
app directly and the magic link sent to users via email

This change also fixes accessibility issue of having a DialogTitle in
DialogContent for screen readers.

Resolves #1090
2025-01-19 13:11:23 +07:00
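The fix above comes down to percent-escaping query parameters before putting them in the login URL. A sketch of the idea using the standard library (the `/auth/magic` path is illustrative, not Khoj's actual route):

```python
from urllib.parse import urlencode

def build_login_url(base_url: str, email: str, code: str) -> str:
    # urlencode percent-escapes reserved characters like '+' and '@',
    # so the server reads back the exact address the user signed up with.
    query = urlencode({"email": email, "code": code})
    return f"{base_url}/auth/magic?{query}"
```

Without the escaping, a `+` in the email arrives at the server as a space, so the account lookup fails even though signup (which sent the email in a POST body) succeeded.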
Debanjum
51f3af11b5 Fix Qwen 2.5 14B model source to use Q4_K_M quantized model
The official Qwen2.5 14B model doesn't mention standard quantization
suffixes like Q4_K_M, so doesn't work with Khoj
2025-01-19 12:27:35 +07:00
sabaimran
f99bd3f1bc Merge branch 'master' of github.com:khoj-ai/khoj into features/add-chat-controls 2025-01-17 17:49:08 -08:00
sabaimran
af9e906cb5 Use python3 instead of python when running pip install commands in gh actions 2025-01-17 17:48:42 -08:00
sabaimran
c80e0883ee Use python3 instead of python for install pip commands in GH action 2025-01-17 17:36:46 -08:00
sabaimran
b8c866014d Improve instruction description for the agent command description for notes 2025-01-17 17:19:59 -08:00
sabaimran
7481f78f22 Remove unused API request 2025-01-17 17:18:47 -08:00
sabaimran
5aadba20a6 Add backend support for hidden agents (not yet enabled) 2025-01-17 16:46:37 -08:00
sabaimran
2fa212061d Add a right hand side bar for chat controls 2025-01-17 16:45:50 -08:00
Tuğhan Belbek
849348e638 Handle additional HTTP redirect status code 308 in scheduled chat requests (#1088)
Closes #1067
2025-01-16 07:03:43 -08:00
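HTTP 308 (Permanent Redirect) behaves like 301 but forbids changing the request method, which matters for the POSTs scheduled chats make. A sketch of the broadened redirect check this commit implies (names are illustrative):

```python
# Status codes whose Location header should be followed on retry.
# 307/308 preserve the request method, which scheduled POSTs rely on.
REDIRECT_STATUSES = {301, 302, 303, 307, 308}

def next_request_url(status: int, headers: dict):
    """Return the URL to retry against, or None if the response is not a redirect."""
    if status in REDIRECT_STATUSES and "Location" in headers:
        return headers["Location"]
    return None
```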
Debanjum
00843f4f24 Release Khoj version 1.34.0 2025-01-16 12:11:28 +07:00
Debanjum
ad27f34c96 Support online search using Google Search API (#1087)
Add official Google Search API as an online search provider.
We currently support Serper.dev and Searxng as online search providers.
2025-01-16 11:41:59 +07:00
Debanjum
a649b03658 Support online search using Google Search API 2025-01-16 11:39:03 +07:00
Sam Ho
f8f159efac feat: add turnId handling to chat messages and history 2025-01-16 00:44:16 +00:00
sabaimran
42d4d15346 Merge pull request #1054 from khoj-ai/features/add-support-for-mermaidjs
We've been having issues generating diagrams with Excalidraw that are any degree of complexity. By contrast, LLMs are able to handle Mermaid.js syntax a lot better, as it's much more forgiving and has a simpler declarative style. Refer to https://mermaid.js.org/.

Update so that new diagrams are generated with Mermaid.js, while old diagrams generated with Excalidraw can still be viewed.
2025-01-15 11:55:12 -08:00
Debanjum
e2b2b3415e Fix handling of inline base64 images by Obsidian, Desktop clients
Fix for #1082 pushed adding the `data:image/webp;base64' prefix to
base64 images down to the server image gen API. But the code on the
Obsidian and Desktop clients still adds these prefixes.

This change stops the Desktop, Obsidian clients from adding the prefix
as it is being handled by the API now. It should resolve showing
images inline in those clients as well
2025-01-15 23:34:23 +07:00
Debanjum
2e585efd2f Fix end with newline styling issue in style.css to pass lint checks 2025-01-15 19:43:02 +07:00
Debanjum
ed18c04576 Fix wrapping base64 generated image for inline display
Resolves #1082
2025-01-15 19:19:31 +07:00
Debanjum
f8b887cabd Allow using OpenAI (compatible) API for Speech to Text transcription 2025-01-15 19:19:31 +07:00
Debanjum
182c49b41c Prefer explicitly configured OpenAI API url, key for image gen model
Previously we'd use the general openai client, even if the image
generation model has a different api key and base url set.

This change uses the openai config of the image generation model when
set. Otherwise it falls back to using the first openai api provider set
2025-01-15 19:19:31 +07:00
Debanjum
24204873c8 Use same openai base url env var name as the official openai client
This eases re-use of the OpenAI API across all openai clients,
including chat, image generation, speech to text.

Resolves #1085
2025-01-15 19:19:30 +07:00
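`OPENAI_BASE_URL` is the environment variable the official openai Python client reads, so aligning on the same name lets one variable configure chat, image generation, and speech to text at once. A sketch of the fallback behavior (assumed, not Khoj's actual code):

```python
import os

def openai_base_url() -> str:
    """Read the same env var as the official openai client, with the public API as fallback."""
    return os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
```

Pointing the variable at an OpenAI-compatible server (e.g. a local Ollama or LiteLLM endpoint) then redirects every client that honors it.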
Debanjum
63dd3985b5 Support using Embeddings Model exposed via OpenAI (compatible) API (#1051)
This change adds the ability to use OpenAI, Azure OpenAI or any embedding model exposed behind an OpenAI compatible API (like Ollama, LiteLLM, vLLM etc.).

Khoj previously only supported HuggingFace embedding models running locally on device or via the HuggingFace inference API endpoint. This allows using commercial embedding models to index your content with Khoj.
2025-01-15 17:39:54 +07:00
Debanjum
a6bf6803b6 Add docs on how to add, edit search model configs when self-hosting 2025-01-15 17:30:18 +07:00
Debanjum
92a1ec7afc Do not auto restart khoj docker services by default
Let folks who want to add that, add it manually if they want to. It
creates too much noise for folks having trouble with self-host setup
2025-01-15 13:09:50 +07:00
Debanjum
85c537a1de Set default PORT arg in Dockerfile to default Khoj port, 42110 2025-01-15 13:09:50 +07:00
Debanjum
9355381fac Catch error in call to data sources, output format selection tool AI
Previously if the call to this tool AI failed, the API call would
non-gracefully fail on the server. This would leave the client hanging
in a weird state (e.g. with spinner running on web app with no
indication of issue).

Also do not show filters in query debug log lines when no filters in query
2025-01-15 13:09:50 +07:00
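The pattern here is wrapping the tool-selection AI call so a model failure degrades to a sensible default instead of crashing the request mid-stream. A hypothetical sketch (the default tool list and function names are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

DEFAULT_TOOLS = ["notes"]  # hypothetical safe default when selection fails

def pick_tools(select_with_ai, query: str) -> list:
    """Ask the tool-selection AI; fall back to defaults instead of failing the request."""
    try:
        return select_with_ai(query)
    except Exception as e:
        logger.warning(f"Tool selection failed, using defaults: {e}")
        return DEFAULT_TOOLS
```

With this shape, the client always gets a response stream rather than a hung spinner when the selection model errors out.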
Debanjum
24ab8450ba Handle scenario where read chat stream error is not json on web app 2025-01-15 13:09:50 +07:00
sabaimran
0b775c77d3 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-a-knowledge-base-page 2025-01-13 15:07:59 -08:00
Sam Ho
fc6fab4cce chore: fix format issue from pre-commit hook - trailing-whitespace and end-of-file-fixer 2025-01-13 20:07:48 +00:00
sabaimran
7f329e7e9d Fix configuration of name field for chat model options during initialization 2025-01-12 22:37:08 -08:00
sabaimran
1a00540ee9 Improve error handling in mermaid chart rendering 2025-01-12 22:36:31 -08:00
Osama Ata
96e3d0a7b9 Fix stale lmstudio documentation to set ai model api via admin panel (#1075)
Use new name `Ai Model API` instead of `OpenAI Processor Conversation Config`
2025-01-12 03:06:01 -08:00
Yash-1511
27165b3f4a fix: review suggestions 2025-01-12 15:12:14 +05:30
Debanjum
6bd9f6bb61 Give a shorter, simpler name to github workflow to deploy docs 2025-01-12 10:54:56 +07:00
Sam Ho
93687f141a feat: do not show delete button on system messages 2025-01-11 17:35:57 +00:00
Sam Ho
a9c180d85f feat: add delete chat message action to the Obsidian plugin 2025-01-11 17:19:40 +00:00
Debanjum
51a774c993 Add contrast to setting card inputs in dark mode on web app 2025-01-11 14:50:47 +07:00
Debanjum
9e8b8dc5a2 Toggle showing api key on web settings page via a visibility toggle
- Background
  Access to the clipboard API is disabled by certain browsers in non
  localhost http scenarios for security reasons.

  So the copy API key button doesn't work when khoj is self-hosted
  with authentication enabled at a non localhost domain

- Change
  This change enables copying API keys by manual text highlight + copy
  if copy button is disabled

Resolves #1070
2025-01-11 14:50:47 +07:00
Debanjum
25c39bd7da Extract api keys setting card into separate component on web app 2025-01-11 14:50:46 +07:00
sabaimran
c30047e859 Fix Obsidian style.css 2025-01-10 22:18:44 -08:00
sabaimran
da2b89e46a Merge branch 'master' of github.com:khoj-ai/khoj into features/add-a-knowledge-base-page 2025-01-10 22:18:14 -08:00
sabaimran
f170487338 Fix apostrophe in the add documents modal 2025-01-10 21:58:17 -08:00
sabaimran
be4b091a21 Add new line to styles.css 2025-01-10 21:52:52 -08:00
sabaimran
f398e1eb0c Add codeblock rendering for the mermaidjs diagram in obsidian 2025-01-10 21:46:39 -08:00
Debanjum
6e955e158b Use normalized email address for new users
Do not check email deliverability for now to allow air-gapped usage
or authenticated/multi-user setups with admin managed otp

Closes #1069
2025-01-11 12:28:40 +07:00
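Normalizing the email before account creation prevents duplicate users differing only in domain casing. A minimal sketch of the idea (the real implementation likely uses a validation library with deliverability checks disabled, per the commit message):

```python
def normalize_email(email: str) -> str:
    """Lowercase the domain part of an email; the local part's case is preserved per RFC 5321."""
    local, _, domain = email.strip().rpartition("@")
    return f"{local}@{domain.lower()}"
```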
sabaimran
c441663394 Merge branch 'master' of github.com:khoj-ai/khoj into features/add-support-for-mermaidjs 2025-01-10 21:25:33 -08:00
sabaimran
85c34a5f0f Merge pull request #1018 from hjamet/master
This PR delivers comprehensive improvements to the Khoj plugin across multiple key areas:

🔍 Search Enhancements:
- Added visual loading indicators during search operations
- Implemented color-coded results to distinguish between vault and external files
- Added abort logic for previous requests to improve performance
- Enhanced search feedback with clear status indicators
- Improved empty state handling

🔄 Synchronization Improvements:
- Added configurable sync interval setting in minutes
- Implemented manual "Sync new changes" command
- Enhanced sync timer management with automatic restart
- Improved notification system for sync operations

📁 Folder Management:
- Added granular folder selection for sync
- Implemented intuitive folder suggestion modal
- Enhanced folder list visualization

💅 UI/UX Improvements:
- Added loading animations and spinners
- Enhanced search results visualization with color coding
- Refined chat interface styling
- Improved overall settings panel organization

🔧 Technical Improvements:
- Refactored search and synchronization logic
- Implemented proper request cancellation
- Enhanced error handling and user feedback
- Improved code organization and maintainability
2025-01-10 21:24:12 -08:00
sabaimran
57545c1485 Fix the migration script to delete orphaned fileobjects
- Remove knowledge page from the sidebar
- Improve speed and rendering of the documents in the search page
2025-01-10 21:06:48 -08:00
sabaimran
d77984f9d1 Remove separate knowledge base file - consolidated in the search page 2025-01-10 18:57:38 -08:00
sabaimran
f2c6ce2435 Improve rendering of the file objects and sort files by updated_date 2025-01-10 18:18:15 -08:00
Patrick Jackson
6e0c767ff0 Use the configured OpenAI Base URL for Automations (#1065)
This change makes Automations (and possibly other entrypoints) use the configured OpenAI-compatible server if that has been set. Without this change it tries to use the hardcoded OpenAI provider.
2025-01-10 17:17:51 -08:00
sabaimran
454a752071 Initial commit: add a dedicated page for managing the knowledge base
- One current issue in the Khoj application is that managing the files being referenced as the user's knowledge base is slightly opaque and difficult to access
- Add a migration for associating the fileobjects directly with the Entry objects, making it easier to get data via foreign key
- Add the new page that shows all indexed files in the search view, also allowing you to upload new docs directly from that page
- Support new APIs for getting / deleting files
2025-01-10 16:24:50 -08:00
Debanjum
1b5826d8b6 Support using Embeddings Model exposed via OpenAI (compatible) API 2025-01-10 23:48:04 +07:00
Debanjum
65f1c27963 Remove old, big warning about Khoj not configured on server init
- Just say using default config. This old khoj.yml settings mechanism
  isn't standard, so not having a configured khoj.yml isn't a concern
- Deep link to desktop download instead of the whole download page as
  android etc. are also on it, which don't help with syncing docs
2025-01-10 23:46:27 +07:00
Debanjum
3cc6597b49 Support Azure OpenAI API endpoint (#1048)
OpenAI chat models deployed on Azure are (ironically) not OpenAI API compatible endpoints.
This change enables using OpenAI chat models deployed on Azure with Khoj.
2025-01-10 08:35:03 -08:00
sabaimran
bac90ad69d Upgrade deploy-pages action to v4 2025-01-09 19:04:31 -08:00
Debanjum
2069f571c8 Upgrade upload-artifact gh action to v4 as <=v3 deprecated
This started failing github workflow jobs
2025-01-10 00:41:24 +07:00
Debanjum
dd63bd8bcf Fix dark mode dropdown colors of phone no. country code on web settings page
Resolves #1046
2025-01-10 00:10:51 +07:00
Debanjum
bb6a6cbe19 Restart Khoj docker services unless stopped. Remove default network
- Seems less aggressive to use unless-stopped versus always.
- Default network is used anyway, so doesn't seem necessary to specify
2025-01-09 22:08:48 +07:00
Debanjum
01d27f5220 Do not show user logout button on web app side pane in anonymous mode 2025-01-09 21:18:50 +07:00
Refer
https://github.com/khoj-ai/khoj/issues/1050#issuecomment-2579119234
2025-01-09 21:18:50 +07:00
Debanjum
a739936563 Mark Github integration as unmaintained in documentation
Also mention what the Reflective Questions table is about

Resolves #1059
2025-01-09 20:42:38 +07:00
Debanjum
1eaa54b0ae Make PAT token requirement optional for Github indexing for now
The github integration has not been tested and may still be broken.
This change at least makes it easier to add repositories without
needing a PAT token if/when it does work.
2025-01-09 20:42:38 +07:00
sabaimran
ec02757fd1 Add an export feature along with the mermaid diagram. Add sidebar to loading page. 2025-01-08 23:53:58 -08:00
sabaimran
889f34c7bf Adjust typing and error handling for incorrect diagrams 2025-01-08 23:22:16 -08:00
sabaimran
11fcf2f299 Remove dangling response type 2025-01-08 22:10:59 -08:00
sabaimran
6b0a49b12d Add the mermaid package and apply front-end parsing
- Add the mermaid package and apply front-end parsing for interpreting the diagrams. Retain processing of the excalidraw type for backwards compatibility
2025-01-08 22:09:35 -08:00
sabaimran
539ce99343 Add backend support for parsing and processing and storing mermaidjs diagrams
- Replace default diagram output from excalidraw to mermaid
- Retain typing of the excalidraw type for backwards compatibility in the chatlog
2025-01-08 22:08:40 -08:00
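Backend support for Mermaid.js diagrams means pulling the fenced diagram body out of the model's response before storing it. A hypothetical sketch of that extraction step (not Khoj's actual parsing code):

```python
import re

def extract_mermaid(response: str):
    """Pull the body of a ```mermaid fenced block out of a model response, or None."""
    match = re.search(r"```mermaid\s*\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else None
```

The stored diagram source can then be rendered client-side with the mermaid package, while old Excalidraw payloads keep their existing type for backwards compatibility.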
sabaimran
c448c49811 Clean-up some code on homepage and disable initial card animations because of jitter 2025-01-08 22:07:23 -08:00
Ikko Eltociear Ashimine
9ce9f02886 Fix typo in Admin Doc (#1034)
appropiate -> appropriate
2025-01-08 21:27:34 -08:00
sabaimran
ad5f0c7a02 Merge pull request #1029 from DPS0340/master
Improve docker-compose.yml

- Do not expose dependencies on host internet
- Put all services on the same network
2025-01-08 10:42:47 -08:00
Henri Jamet
f42b0cb08c Refactor comments and CSS for improved clarity in Khoj plugin
- Translated comments from French to English for better accessibility and understanding.
- Updated CSS comment for loading animation to reflect the change in language.
- Enhanced code readability by ensuring consistent language usage across multiple files.
- Improved user experience by clarifying the purpose of various functions and settings in the codebase.
2025-01-08 09:31:43 +01:00
sabaimran
875cdde9b9 Release Khoj version 1.33.2 2025-01-07 15:32:18 -08:00
sabaimran
5c5c4a6bbc Add help text for Enterprises in the README 2025-01-07 14:55:57 -08:00
sabaimran
8d028e10c6 Fix populating login url in sign in email 2025-01-07 14:53:00 -08:00
omahs
36bdaedd2d Fix typos in Khoj Docs (#1033) 2025-01-07 15:55:57 +07:00
sabaimran
25c1c1c591 Release Khoj version 1.33.1 2025-01-06 09:08:01 -08:00
sabaimran
689d9d8b3a Update formatting in admin.py and utils.py 2025-01-06 09:07:28 -08:00
thinker007
aa442c28eb Handle reporting chat estimated cost when some fields unavailable (#1026)
Fix AttributeError: 'NoneType' object has no attribute 'model_extra'

* cost = chunk.usage.model_extra.get("estimated_cost", 0) if hasattr(chunk, "usage") and chunk.usage else 0  # Estimated costs returned by DeepInfra API
2025-01-06 09:03:49 -08:00
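The fix quoted above is a defensive read of a provider-specific field: any missing link in the `chunk.usage.model_extra` chain should yield 0 instead of raising `AttributeError`. Expanded into a standalone sketch (attribute names follow the commit; the function name is illustrative):

```python
def estimated_cost(chunk) -> float:
    """Read the DeepInfra-style estimated_cost field, tolerating absent usage metadata."""
    usage = getattr(chunk, "usage", None)
    if usage and getattr(usage, "model_extra", None):
        return usage.model_extra.get("estimated_cost", 0)
    return 0
```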
sabaimran
4aed6f7e08 Add a link around the header khojlogotype to go home 2025-01-06 08:55:00 -08:00
Debanjum
266d274e21 Make automation should_notify check robust to non json mode chat models
Use clean_json to handle the automation should_notify check for Gemini
and other chat models where enforcing JSON mode is problematic or not supported
2025-01-06 20:16:32 +07:00
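Models without enforceable JSON mode often wrap their JSON in a markdown fence. A minimal sketch of what a `clean_json` helper like the one named above might do (this is an assumed implementation, not Khoj's actual function):

```python
import json

def clean_json(raw: str):
    """Parse JSON from a model reply that may be wrapped in a ``` markdown fence."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (with its optional "json" tag) and the closing fence
        text = text.split("\n", 1)[1] if "\n" in text else ""
        text = text.rsplit("```", 1)[0]
    return json.loads(text.strip())
```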
Debanjum
9a5e3583cf Remove bullet styling only from sidebar items on web app
Previous fix had removed bullet styling from all components in web
app. This made chat messages on the web app lose bullet styling too.
2025-01-06 20:15:42 +07:00
Debanjum
dc0bc5bcca Evaluate information retrieval quality using eval script
- Encode article urls in filename indexed in Khoj KB
  Makes it easier for humans to compare, trace retrieval performance
  by looking at logs than using content hash (which was previously
  explored)
2025-01-06 13:19:52 +07:00
Debanjum
daeba66c0d Optionally pass references used by agent for response to eval scorers
This will allow the eval framework to evaluate retrieval quality too
2025-01-06 13:19:52 +07:00
Debanjum
8231f4bb6e Return accuracy as decision to generalize across IR & standard scorers 2025-01-06 13:19:52 +07:00
Jiho Lee
c1c086e431 fix: Use localhost on SEARXNG_BASE_URL
Co-authored-by: sabaimran <65192171+sabaimran@users.noreply.github.com>
2025-01-06 13:32:06 +09:00
sabaimran
eb9aadf72a Add an Obsidian README documentation for development 2025-01-05 19:06:27 -08:00
sabaimran
e89e49818b Merge pull request #1028 from ReallyVirtual/patch-1
Update image_generation.md
2025-01-05 13:53:00 -08:00
sabaimran
616cc189d1 Remove bullet points from li styling explicitly 2025-01-05 13:52:08 -08:00
sabaimran
a5705a5aa6 After agent prompt safe check is parsed as json, load it into a json object 2025-01-05 13:40:20 -08:00
Yash-1511
f306159a5a feat: add autocomplete suggestions feature in search page 2025-01-05 17:30:00 +05:30
Jiho Lee
a5c7315874 feat: Improve docker-compose.yml
- Remove host port mappings on dependencies
- Add 'restart: always'
- Add default network for lookup by docker dns
2025-01-05 15:17:50 +09:00
Sohaib Athar
95a2387e9b Update image_generation.md
Fixed typo
2025-01-05 07:25:41 +05:00
sabaimran
756f4a2a66 Update regeneration logic to run for all entries now that we have a single search model ID 2025-01-03 15:16:10 -08:00
Debanjum
33f85b4a55 Fix title of Query Filters documentation 2025-01-01 18:56:16 +08:00
sabaimran
0ef787b57c Set waitBeforeSeconds to 0 2024-12-30 15:08:34 -08:00
Debanjum
90b4e03454 Pull out query filters as top level documentation page
- Note perf eval from 2022
- Update links to query-filters in docs
- Fix links
- Update image model docs
2024-12-30 14:35:05 -08:00
sabaimran
8f69eb949b Release Khoj version 1.33.0 2024-12-29 10:39:26 -08:00
Henri Jamet
1aff78a969 Enhance Khoj plugin search functionality and loading indicators
- Added visual loading indicators to the search modal for improved user experience during search operations.
- Implemented logic to check if search results correspond to files in the vault, with color-coded results for better clarity.
- Refactored the getSuggestions method to handle loading states and abort previous requests if necessary.
- Updated CSS styles to support new loading animations and result file status indicators.
- Improved the renderSuggestion method to display file status and provide feedback for files not in the vault.
2024-12-29 16:19:42 +01:00
Henri Jamet
7d28b46ca7 Implement sync interval setting and enhance synchronization timer in Khoj plugin
- Added a new setting for users to configure the sync interval in minutes, allowing for more flexible automatic synchronization.
- Introduced methods to start and restart the synchronization timer based on the configured interval.
- Updated the synchronization logic to use the user-defined interval instead of a fixed 60 minutes.
- Improved code readability and organization by refactoring the sync timer logic.
2024-12-29 13:23:08 +01:00
Henri Jamet
c5c9e0072c Enhance Khoj plugin settings and UI for folder synchronization
- Added a new setting to manage sync folders, allowing users to specify which folders to sync or to sync the entire vault.
- Implemented a modal for folder suggestions to facilitate folder selection.
- Updated the folder list display to show currently selected folders with options to remove them.
- Improved CSS styles for chat interface and folder list for better user experience.
- Refactored code for consistency and readability across multiple files.
2024-12-29 13:07:21 +01:00
Henri Jamet
833ab52986 Add manual sync command to Khoj plugin
- Introduced a new command 'Sync new changes' to allow users to manually synchronize new modifications.
- The command updates the content index without regenerating it, ensuring only new changes are synced.
- User-triggered notifications are displayed upon successful sync.
2024-12-29 12:51:52 +01:00
Debanjum
1a43ca75f3 Update to latest jinja python package dependency 2024-12-27 01:44:41 -08:00
Debanjum
ca197ba9ba Improve Automation Flexibility and Automation Email Format (#1016)
- Format AI response to send in automation email
- Let Khoj infer chat query based on user automation query
- Decide if automation emails should be sent based on response
  - Fix the `to_notify_or_not` decider AI
  - Ask reason before decision to improve to_notify decider AI
- Show error message on web app when fail to create/update automation
2024-12-27 01:36:38 -08:00
Debanjum
90685ccbb0 Show error message if update automation via web app fails 2024-12-26 22:21:56 -08:00
Debanjum
c4bb92076e Convert async create automation api endpoints to sync 2024-12-26 21:59:55 -08:00
Debanjum
9674de400c Format AI response to send in automation email
Previously we sent the AI response directly. This change post
processes the AI response with the awareness that it is to be sent to
the user as an email to improve rendering and quality of the emails.
2024-12-26 21:04:50 -08:00
Debanjum
6d219dcc1d Switch to let Khoj infer chat query based on user automation query
This tries to decouple the automation query from the chat query. So
the chat model doesn't have to know it is running in an automation
context or figure how to notify user or send automation response. It
just has to respond to the AI generated `query_to_run' corresponding
to the `scheduling_request` automation by the user.

For example, a `scheduling_request' of `notify me when X happens'
results in the automation calling the chat api with a `query_to_run`
like `tell me about X` and deciding if to notify based on information
gathered about X from the scheduled run. If these two are not
decoupled, the chat model may respond with how it can notify about X
instead of just asking about it.

Swap query_to_run with scheduling_request on the automation web page
2024-12-26 21:04:50 -08:00
Debanjum
3600a9a4f3 Ask reason before decision to improve to_notify decider automation AI
Previously it just gave a decision. This was hard to debug during
prompt tuning. Asking for reason before decision improves models
decision quality.
2024-12-26 21:04:50 -08:00
Debanjum
dcc5073d16 Fix the to_notify decider automation chat actor. Add detailed logging 2024-12-26 21:04:50 -08:00
sabaimran
03b4667acb Merge pull request #1017 from khoj-ai/features/update-home-page
- Rather than chunky generic cards, make the suggested actions more action oriented, around the problem a user might want to solve. Give them follow-up options. Design still in progress.
2024-12-24 12:12:37 -08:00
sabaimran
5985ef4c7c Further improve prompts 2024-12-24 12:11:27 -08:00
sabaimran
012e0ef882 Add tooltip for file input ref. 2024-12-24 11:20:39 -08:00
sabaimran
a58b3b4a37 Remove some of the step one suggestions 2024-12-24 10:52:42 -08:00
sabaimran
cf78f426d3 Merge branch 'master' of github.com:khoj-ai/khoj into features/update-home-page 2024-12-24 09:52:06 -08:00
sabaimran
d27d291584 Merge pull request #1015 from khoj-ai/features/clean-up-authenticated-data
- De facto, was being assumed everywhere if authenticatedData is null, that it's not logged in. This isn't true because the data can still be loading. Update the hook to send additional states.
- Bonus: Delete model picker code and a slew of unused imports.
2024-12-24 09:51:39 -08:00
sabaimran
cd4cf4f9f6 Merge pull request #1014 from khoj-ai/features/improve-agent-management
- Add support for seeing all steps of the agent modification flow via tabs at the top of the modal
- Separate knowledge base & tool selection into two separate parts
2024-12-24 09:50:38 -08:00
sabaimran
3e6ba45cbe Merge pull request #1013 from khoj-ai/features/use-sidebar
Use the [shadcn sidebar](https://ui.shadcn.com/docs/components/sidebar#sidebarmenusub) across Khoj and standardize how the side panel experience works across the app. This helps us generalize the code better, while re-using the same components. 

Deprecates current usage of the chat history side panel, replacing it with the new `appSidebar.tsx` component.

We'll eventually move out the `Manage Files` section and move it into a separate panel dedicated to chat-level controls.
2024-12-24 09:50:09 -08:00
sabaimran
95fdbe13ae move all conversations button to bottom of side panel 2024-12-23 23:13:19 -08:00
sabaimran
d0256f267e Fix submit state for form with buttons sticky to bottom 2024-12-23 23:10:19 -08:00
Debanjum
0d5fc70aa3 Fix colors, title on create agent card 2024-12-23 20:01:56 -08:00
Debanjum
0b91383deb Make post oauth next url redirect more robust, handle q params better 2024-12-23 17:50:45 -08:00
Debanjum
17f8ba732d Autofocus on email input box when email sign-in selected on web app 2024-12-23 17:49:41 -08:00
sabaimran
fb111a944b Use disable_https flag instead of is_in_debug_mode to decide whether to redirect google auth request 2024-12-23 17:23:47 -08:00
sabaimran
3fd0c202ea Allow better spacing in the agent card and make the buttons sticky 2024-12-23 16:43:50 -08:00
sabaimran
c83709fdd1 Further clean up in home page initial cards experience 2024-12-22 18:11:19 -08:00
sabaimran
4c4f4401b1 Make suggestion cards a little more minimal 2024-12-22 11:54:27 -08:00
sabaimran
90b02b4cfe Merge branch 'features/clean-up-authenticated-data' of github.com:khoj-ai/khoj into features/update-home-page 2024-12-22 11:26:29 -08:00
sabaimran
798837378f Improve mobile friendliness and highlight missing necessary data 2024-12-22 11:02:50 -08:00
sabaimran
7032ccf130 Show create agent button when not logged in on agents page 2024-12-22 10:01:15 -08:00
sabaimran
0fefbac89f Improve sidebar experience for not logged in state 2024-12-22 09:58:21 -08:00
sabaimran
60f80548a4 Remove unused span text 2024-12-22 09:21:40 -08:00
sabaimran
9f84f5dcc7 Give more breathing space to the sidebar footer 2024-12-22 09:18:05 -08:00
sabaimran
dc17272f70 Fix some spacing in the nav menu 2024-12-22 09:01:09 -08:00
sabaimran
45da6ec750 Separate the shorthand of each suggestion card from the prefilled text 2024-12-21 20:46:57 -08:00
sabaimran
bf405f50d7 Fix spacing of main content in the settings page 2024-12-21 20:46:21 -08:00
sabaimran
62dd4c55d4 Further improve UX of the suggestion cards 2024-12-21 20:10:54 -08:00
sabaimran
8c9c57e060 Merge branch 'features/clean-up-authenticated-data' of github.com:khoj-ai/khoj into features/update-home-page 2024-12-21 19:32:07 -08:00
sabaimran
8c6b4217ae Set width of chat layout to 100% 2024-12-21 19:29:37 -08:00
sabaimran
a17cc9db38 Fix handling 403 forbidden error from auth response 2024-12-21 19:19:35 -08:00
sabaimran
95826393e1 Update the home page suggestion cards
- Rather than chunky generic cards, make the suggested actions more action oriented, around the problem a user might want to solve. Give them follow-up options. Design still in progress.
2024-12-21 18:57:19 -08:00
Debanjum
8d129c4675 Bump default max prompt size for commercial chat models 2024-12-21 17:31:05 -08:00
Debanjum
37ae48d9cf Add support for OpenAI o1 model
Needed to handle the current API limitations of the o1 model.
Specifically its inability to stream responses
2024-12-21 17:25:32 -08:00
sabaimran
2c7c16d93e Fix conditional reference to is mobile width hook 2024-12-21 12:48:29 -08:00
sabaimran
e9dae4240e Clean up all references to authenticatedData
- It was de facto assumed everywhere that if authenticatedData is null, the user is not logged in. This isn't true because the data can still be loading. Update the hook to send additional states.
- Bonus: Delete model picker code and a slew of unused imports.
2024-12-21 08:45:43 -08:00
sabaimran
b1c5c5bcc9 Fix path to component library in shadcn sidebar 2024-12-20 15:48:57 -08:00
sabaimran
cc7fd1163f Use tabs to separate sections in the agent mod form
- Add knowledge base as a separate section, apart from tools
- This makes it easier to navigate the different components quickly
2024-12-20 14:41:15 -08:00
sabaimran
078753df30 Add chatwoot to the frame-src CSP 2024-12-20 13:33:48 -08:00
sabaimran
efe812e323 Add components for tabs in agent page 2024-12-20 13:33:20 -08:00
sabaimran
ba792c02ba Improve share chat UI for alignment 2024-12-20 12:28:31 -08:00
sabaimran
7770caa793 Add side bar inset to home page. Simplify automations card. 2024-12-20 11:37:23 -08:00
sabaimran
b8a9dcd600 Improve mobile layout with sidebar inset and fix dark mode logo 2024-12-19 23:23:52 -08:00
sabaimran
b1880d9c9d Add side bar to search page 2024-12-19 22:46:50 -08:00
sabaimran
02c503e966 Further improve mobile layout with side panel 2024-12-19 22:00:32 -08:00
sabaimran
43331f7730 Remove unused css classes 2024-12-19 21:36:35 -08:00
sabaimran
b644bb8628 Further improve mobile friendliness 2024-12-19 21:34:04 -08:00
sabaimran
9d5480d886 Improve mobile friendliness across chat and home pages. 2024-12-19 20:33:53 -08:00
sabaimran
68af10c805 Use the new shadcn sidebar for khoj nav and actions
- Use the sidebar across all pages to quickly navigate through the app, access settings, and go to past chats
- Pending: mobile friendliness
2024-12-19 20:10:03 -08:00
sabaimran
7eb15bf0a9 Update shadcn components 2024-12-19 14:33:36 -08:00
sabaimran
a4aeb9fdf3 Simplify the home page color scheme and overall design 2024-12-19 14:02:53 -08:00
Debanjum
0ae21e5628 Use icons, not text labels, for sidebar nav items on docs website 2024-12-17 20:49:09 -08:00
sabaimran
cafe1b0655 Remove error log line for payload inclusion 2024-12-17 19:48:22 -08:00
Debanjum
1e3f452d15 Handle sharing old conversations publicly even if they have no slug
New conversations have a slug, but older conversations may not. This
change allows those older conversations to still be shareable by using
a random uuid to construct their url instead
2024-12-17 19:44:36 -08:00
Debanjum
90b7ba51a4 Make agent prompt safety checker more forgiving and concise
- Some of the instructions were duplicated (e.g illegal, harmful)
- Return format requested was inconsistent
- Safety prompt felt overly prudish, which lowered its utility.
  Make it laxer for now; add checks later if required
2024-12-17 19:44:36 -08:00
sabaimran
bcee2ea01a Release Khoj version 1.32.2 2024-12-17 16:03:29 -08:00
sabaimran
92144c8102 Remove release step in todesktop flow, since we need to run releases manually now
- Leaving it commented out for the time being so we can revisit automating this later
2024-12-17 16:02:45 -08:00
sabaimran
f291884921 If not in debug mode, force google auth to use the https protocol 2024-12-17 15:44:18 -08:00
sabaimran
bcc1bc6854 Log the payload sent temporarily in order to help with debugging 2024-12-17 15:37:42 -08:00
sabaimran
7ca2553d17 Update login popup copy 2024-12-17 15:20:44 -08:00
sabaimran
ef99d8c28e Add more descriptive error logs when google auth token verification fails 2024-12-17 15:18:38 -08:00
Debanjum
60d55a83c4 Use khoj logo via url in readme to load in other locations like pypi 2024-12-17 14:00:53 -08:00
Debanjum
10bd56d2b9 Attest Khoj pypi package by upgrading pypi publish gh action
- Print hash in CI to ease verifying ci built python package matches
  khoj package published on pypi
- Newer pypi publish github action should speed up workflow by ~30s
2024-12-17 13:40:39 -08:00
sabaimran
2e80a1ce7c Release Khoj version 1.32.1 2024-12-17 13:28:00 -08:00
Debanjum
df15f00243 Tag docker images with latest tag in dockerize workflow on release 2024-12-17 13:18:51 -08:00
sabaimran
ded168dae3 Release Khoj version 1.32.0 2024-12-17 12:29:20 -08:00
sabaimran
f6abfcfa6b Use latest release version for pypi gh action to publish 2024-12-17 12:19:42 -08:00
Debanjum
c20364efcb Upgrade web app next.js, shadcn and other package dependencies 2024-12-17 11:15:22 -08:00
Debanjum
63d2c6f35a Allow research mode and other conversation commands in automations (#1011)
Major
---
Previously we couldn't enable research mode or use other slash
commands in automated tasks.

This change separates determining if a chat query is triggered via
automated task from the (other) conversation commands to run the query
with.

This unlocks the ability to enable research mode in automations
apart from other variations like /image or /diagram etc.

Minor
---
- Clean the code to get data sources and output formats
- Have some examples shown on automations page run in research mode now
2024-12-17 00:44:51 -08:00
sabaimran
3b050a33bb Include resend as a default dependency, rather than restricting to prod 2024-12-16 22:24:41 -08:00
sabaimran
741e9f56f9 Update admin button for getting login url to include user email 2024-12-16 22:16:05 -08:00
Debanjum
88aa8c377a Support online search with Searxng as zero config, self-hostable solution (#1010)
This allows online search to work out of the box again 
for self-hosting users, as no auth/api key setup is required.

Docker users do not need to change anything in their setup flow.
Direct installers can set up Searxng locally or use public instances if
they do not want to use any of the other providers (like Jina, Serper)

Resolves #749. Resolves #990
2024-12-16 18:59:09 -08:00
sabaimran
7677bb14f8 Update docs home page 2024-12-16 18:48:42 -08:00
sabaimran
1bd0d46b3d Update the theme color used in our docs to match the theme in our emails 2024-12-16 18:42:50 -08:00
sabaimran
432e901087 Update the online search self-hosting instructions to reflect new setup 2024-12-16 18:37:20 -08:00
sabaimran
af2553a890 Remove the Searxng API key env variable for simplicity 2024-12-16 18:36:55 -08:00
sabaimran
19d80d190e Add latest tag to the khoj cloud description for prod 2024-12-16 17:56:12 -08:00
sabaimran
d62cc0d539 Merge branch 'master' of github.com:khoj-ai/khoj into support-online-search-via-searxng 2024-12-16 17:55:06 -08:00
sabaimran
6f3218f487 Merge pull request #1008 from khoj-ai/features/new-sign-in-page
- Make it easier to quickly create the account without losing track of where you are
- Show some capabilities before you sign on
2024-12-16 17:54:43 -08:00
sabaimran
753859fbe0 Make the docs and github buttons on the sign in email less prominent 2024-12-16 17:49:13 -08:00
sabaimran
20888d3930 Clarify some of the language in the sign in email 2024-12-16 17:47:12 -08:00
sabaimran
13e7455f56 Use lowercase c in click 2024-12-16 17:44:08 -08:00
sabaimran
d17a9ba4c8 Fix return data for expired code user 2024-12-16 17:33:53 -08:00
sabaimran
efb0b9f495 Gracefully handle error when user login code is expired 2024-12-16 16:47:54 -08:00
sabaimran
064f7e48ca Update various copy texts for OG metadata and such 2024-12-16 16:40:46 -08:00
sabaimran
28b8f9105d Remove parenthetical from email template 2024-12-16 13:40:41 -08:00
Debanjum
9d02978f6e Support online search with Searxng as turnkey, self-hostable solution
This allows online search to work out of the box again for
self-hosting users, as no auth/api key setup is required.

Docker users do not need to change anything in their setup flow.
Direct installers can set up searxng locally or use public instances if
they do not want to use any of the other providers (like Jina, Serper)

Resolves #749. Resolves #990
2024-12-16 12:53:38 -08:00
Debanjum
9c64275dec Auto redirect requests to use HTTPS if server is using SSL certs 2024-12-16 12:53:38 -08:00
sabaimran
ae9750e58e Add rate limiting to OTP login attempts and update email template for conciseness 2024-12-16 11:59:45 -08:00
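The rate-limiting commit above doesn't show its implementation in this log; a minimal stdlib-only sketch of one common approach, a per-user sliding window, is below. The class name, the limit/window numbers, and the injectable clock are illustrative assumptions, not Khoj's actual code.

```python
import time
from collections import defaultdict, deque


class OtpRateLimiter:
    """Sliding-window limiter: allow at most `limit` attempts per `window` seconds."""

    def __init__(self, limit=5, window=60.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock  # injectable clock makes the limiter deterministic in tests
        self.attempts = defaultdict(deque)  # email -> timestamps of recent attempts

    def allow(self, email):
        now = self.clock()
        recent = self.attempts[email]
        # Drop attempts that have aged out of the window
        while recent and now - recent[0] >= self.window:
            recent.popleft()
        if len(recent) >= self.limit:
            return False
        recent.append(now)
        return True
```

In a real login route, `allow()` would be checked before verifying the submitted OTP, returning an HTTP 429 when it comes back `False`.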
sabaimran
b7783357fa Decrease timeout limit for verification codes to 5 minutes 2024-12-16 09:07:34 -08:00
sabaimran
5d3da3340f Include email in verification API 2024-12-15 13:54:41 -08:00
sabaimran
8e3313156e Simplify the magic link email a little bit 2024-12-14 11:19:31 -08:00
sabaimran
6a56140360 Allow users to directly enter their unique code when logging in
- Code automatically becomes invalid after 30 minutes
2024-12-14 11:06:05 -08:00
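The 30-minute invalidation above reduces to a timestamp comparison; a hedged sketch (the function name and UTC handling are illustrative, only the 30-minute TTL comes from the commit message):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

OTP_TTL = timedelta(minutes=30)  # per the commit message above


def is_code_valid(created_at: datetime, now: Optional[datetime] = None) -> bool:
    """A login code is only accepted within OTP_TTL of its creation time."""
    now = now or datetime.now(timezone.utc)
    return now - created_at < OTP_TTL
```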
sabaimran
c25174e8d4 Apply more finished styling to login features, make the pop-up mobile friendly 2024-12-14 09:46:19 -08:00
sabaimran
73c1fe6ae1 Add text overlay to caption the different assets 2024-12-13 13:45:31 -08:00
Debanjum
132f2c987a Make Khoj email sender configurable for all email variants
The welcome, feedback and automation emails were still using the Khoj
domain, which wouldn't work for self-hosted users with their RESEND key

Resolves #1004
2024-12-13 12:25:23 -08:00
sabaimran
f1fb4525c6 Remove old images 2024-12-13 11:31:14 -08:00
sabaimran
d5681ad1a2 Update image assets to sign up prompt 2024-12-13 11:30:14 -08:00
sabaimran
62545a9af3 Update package path for pypi ci export 2024-12-12 21:25:21 -08:00
sabaimran
e74e922cea Update file path of python installation 2024-12-12 16:50:32 -08:00
sabaimran
144f283a25 Maintain old login page for posterity and associated API 2024-12-12 16:23:44 -08:00
sabaimran
4697daeb1a Improve opengraph metadata across front-end pages 2024-12-12 15:56:43 -08:00
sabaimran
dfc150c442 Merge branch 'master' of github.com:khoj-ai/khoj into features/new-sign-in-page 2024-12-12 15:43:06 -08:00
sabaimran
ad3f8a33d1 Add a static login footer that prompts login, disable input box without auth 2024-12-12 14:57:52 -08:00
Debanjum
2db7a1ca6b Restart code sandbox on crash in eval github workflow (#1007)
See
e3fed3750b
for corresponding change to use pm2 to auto-restart code sandbox
2024-12-12 14:32:03 -08:00
Debanjum
12c976dcb2 Track usage costs from DeepInfra OpenAI compatible API 2024-12-12 14:20:34 -08:00
Debanjum
b0abec39d5 Use chat model name var name consistently and type openai chat utils 2024-12-12 14:20:34 -08:00
Debanjum
4915be0301 Fix initializing chat model names parameter after field rename in #1003 2024-12-12 14:20:33 -08:00
sabaimran
a7d0ed8670 Add carousel for navigating images in the sign up modal 2024-12-12 11:47:41 -08:00
Debanjum
9eb863e964 Restart code sandbox on crash in eval github workflow 2024-12-12 11:28:54 -08:00
Debanjum
01bc6d35dc Rename Chat Model Options table to Chat Model as short & readable (#1003)
- Previous name was incorrectly plural but defined only a single model
- Rename chat model table field to name
- Update documentation
- Update references functions and variables to match new name
2024-12-12 11:24:16 -08:00
sabaimran
943065b7b3 Remove dead dependencies and improve the google sign in button 2024-12-12 11:19:19 -08:00
sabaimran
41bb1e60d0 Use the LoginPrompt in the chat history side panel 2024-12-11 22:56:03 -08:00
sabaimran
b60b750555 Update the styling to align with Google branding via the sign in button
- Disable the gsi client side code since it's being finicky and inconsistent
2024-12-11 22:49:11 -08:00
sabaimran
0f8b055b42 Improve padding for space, esp in mobile 2024-12-11 18:22:48 -08:00
sabaimran
142239d2c9 Add mobile friendliness and replace the login page redirects 2024-12-11 18:01:04 -08:00
sabaimran
de6ed2352a Break up the parts of the login dialog into smaller modules to extend for mobile 2024-12-11 17:18:43 -08:00
sabaimran
d35db99c6f Initial version of a carousel working for sign in with steps for email sign up
- Google sign in is pending with the gsi client code. Will see if I can get that working
- Check in relevant image assets
2024-12-11 16:54:06 -08:00
aditya218
9be26e1bd2 Fix documentation to point to local environment image_generation.md (#1005)
Fix documentation to point to local environment.
2024-12-11 16:12:25 -08:00
sabaimran
530b44cf56 Merge branch 'master' of github.com:khoj-ai/khoj into features/new-sign-in-page 2024-12-11 10:30:13 -08:00
Debanjum
fe09df66b7 Make code sandbox container url accessible to Khoj container in docker compose 2024-12-11 01:14:26 -08:00
Debanjum
59008ae90e Use buildx to create multi platform docker image 2024-12-11 00:21:29 -08:00
Debanjum
ec797bc6b8 Build docker imgs on native arch runners to avoid manifest list error
This also avoids the need to use --amend and annotate steps when
creating the multi-arch docker images
2024-12-10 23:16:36 -08:00
Debanjum
5f7b13df2d Fix new docker tags in workflow to not include forward slashes 2024-12-10 22:55:33 -08:00
Debanjum
ba6237b5c0 Fix to create multi-arch builds. Stop docker image overwrites in workflow 2024-12-10 21:08:17 -08:00
sabaimran
44ede26e67 Temporarily disable cloud arm builds while we disambiguate the build issues 2024-12-10 20:00:59 -08:00
Debanjum
33a5efaf4b Fix undefined variable exception during openai provider setup on init
Resolves #1001
2024-12-10 19:54:00 -08:00
sabaimran
e43341fdcc Release Khoj version 1.31.0 2024-12-10 19:41:31 -08:00
Debanjum
a757ecfd2b Put the generated assets message after the user query and fix prompt 2024-12-10 19:40:13 -08:00
Debanjum
40e4f2ec2e Reduce clutter in chat message ux on Obsidian
- Move khoj message border to left like in web ui
- Remove user, sender emojis and name
- Ensure conversation title always at top of chat sessions view,
  even if no chat sessions loaded yet, instead of causing layout shift
  on chat sessions load
2024-12-10 19:34:17 -08:00
sabaimran
1ec1eff57e Improve mobile header, reduce title bar padding and add conv title 2024-12-10 19:03:57 -08:00
sabaimran
321eeeaed7 Fix setting title of shared conversation, move shared button into the title pane 2024-12-10 18:19:46 -08:00
sabaimran
d7e5a76ace Add an icon in the input bar for research mode 2024-12-10 17:49:25 -08:00
sabaimran
01d000e570 Merge pull request #1002 from khoj-ai/features/improve-multiple-output-mode-generation
Improve handling of multiple output modes

- Use the generated descriptions / inferred queries to supply context to the model about what it's created and give a richer response
- Stop sending the generated image in user message. This seemed to be confusing the model more than helping.
- Collect generated assets in a structured objects to provide model context. This seems to help it follow instructions and separate responsibility better
- Also, rename the open ai converse method to converse_openai to follow patterns with other providers
2024-12-10 17:06:19 -08:00
sabaimran
2bb14c55a6 Merge branch 'master' of github.com:khoj-ai/khoj into features/improve-multiple-output-mode-generation 2024-12-10 16:56:36 -08:00
sabaimran
6c8007e23b Improve handling of multiple output modes
- Use the generated descriptions / inferred queries to supply context to the model about what it's created and give a richer response
- Stop sending the generated image in user message. This seemed to be confusing the model more than helping.
- Also, rename the open ai converse method to converse_openai to follow patterns with other providers
2024-12-10 16:54:21 -08:00
Debanjum
4bc5c1357a Upgrade server, documentation dependencies. Spell fix docker-compose.yml 2024-12-10 15:47:47 -08:00
Debanjum
f8957e52bf Keep chatml message content simple for wider compat unless attachments
This allows for wider compatibility with chat models and openai proxy
ai model apis that expect message content to be string format, not
objects.
2024-12-10 00:10:56 -08:00
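The idea in the commit above, collapsing structured content parts into a plain string whenever a message has no attachments, can be sketched as follows. The function name is hypothetical and the part shapes follow the common OpenAI-style chat message convention; this is an illustration, not Khoj's code.

```python
def simplify_message_content(content):
    """Collapse OpenAI-style content parts to a plain string when the message
    is text-only; keep the structured list if any non-text part (e.g. an
    image attachment) is present, since those require the object format."""
    if isinstance(content, str):
        return content
    if all(part.get("type") == "text" for part in content):
        return "\n".join(part["text"] for part in content)
    return content  # has attachments: structured content is required
```

Plain-string content keeps messages compatible with openai-proxy APIs that reject list-valued `content`.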
sabaimran
4b4e0e20d4 Make the version number a badge, rather than an independent item in the nav dropdown 2024-12-09 14:45:26 -08:00
sabaimran
eb36492ba5 Update handling of images when included in the chat history with assistant message 2024-12-08 21:46:07 -08:00
Debanjum
b660c494bc Use recognizable DB model names to ease selection UX on Admin Panel
Previously ids were used (by default) for model display strings.
This made it hard to select chat model options, server chat settings
etc. in the admin panel dropdowns.

This change uses more recognizable names for the DB objects to ease
selection in dropdowns and display in general on the admin panel.
2024-12-08 20:34:50 -08:00
Debanjum
d10dc9cfe1 Inform code tool AI only limited python packages are available to it
Reduce code tool failing with module not found errors
2024-12-08 20:34:50 -08:00
Debanjum
3fd8614a4b Only auto load available chat models from Ollama provider for now
Allowing models from any openai proxy service makes it too unwieldy.
And a bunch of them do not even support this endpoint.
2024-12-08 20:34:50 -08:00
sabaimran
2c934162d3 Add a data filter for privacy_level of agents 2024-12-08 19:55:57 -08:00
sabaimran
3b9f4c4356 Correct negative for running prod image locally 2024-12-08 19:55:35 -08:00
Debanjum
9dd3782f5c Rename OpenAIProcessorConversationConfig DB model to more apt AiModelApi (#998)
* Rename OpenAIProcessorConversationConfig to more apt AiModelAPI

The DB model name had drifted from what it is being used for,
a general chat api provider that supports other chat api providers like
anthropic and google chat models apart from openai based chat models.

This change renames the DB model and updates the docs to remove this
confusion.

Using Ai Model Api we catch most use-cases including chat, stt, image generation etc.
2024-12-08 18:02:29 -08:00
sabaimran
df66fb23ab Centralize definition of the content security policy and add in-app chat
- in-app chat is meant for support requests and currently is only in the settings page, where users are most likely to go if confused IMO
2024-12-08 17:57:27 -08:00
sabaimran
0b87c13f8d Add khoj_version to the settings menu 2024-12-08 17:55:56 -08:00
sabaimran
47a087c73b Fix chatwoot import issue by checking whether we're in an execution environment before loading the script 2024-12-08 17:16:20 -08:00
sabaimran
66f59c8d41 Add Chatwoot to documentation
See repo: https://github.com/chatwoot/chatwoot
2024-12-08 16:51:43 -08:00
sabaimran
6ed051d631 Merge pull request #994 from khoj-ai/features/update-desktop-app
Simplify the desktop app

- Make the desktop app mainly a file-syncing client for users who have lots of documents that they need to share with Khoj. This is because the web app provides a fairly robust chat client which can be used by anyone on their computer.
- The chat client in the desktop app had significantly drifted from our current brand / theme, and didn't provide enough value add to update. Later, we will make it easier to install the existing web app as a desktop PWA.
2024-12-08 15:05:35 -08:00
sabaimran
05b3911080 Update some button titles and add descriptions for clarity 2024-12-08 14:29:12 -08:00
sabaimran
b78b92d6a0 Merge branch 'master' of github.com:khoj-ai/khoj into features/update-desktop-app 2024-12-08 14:20:20 -08:00
sabaimran
e3789aef49 Merge pull request #992 from khoj-ai/features/allow-multi-outputs-in-chat
Currently, Khoj has terminal states with respect to what assets it outputs. We limit it to image, text, and excalidraw diagram. This limitation is unnecessary and provides undue constraints for creating more dynamic, diverse experiences. For instance, we may want the chat view to morph for document editing or generation, in which case having limited output modes would be a detriment.

This change allows us to emit generated assets and then continue on to more text generation in final response. It forces text response for all messages. It adds a new stream event, GENERATED_ASSETS, which holds content that the AI is emitting in response to the query.
2024-12-08 14:19:05 -08:00
sabaimran
a2251f01eb Make result optional for code context, relevant when code execution was unsuccessful 2024-12-08 13:27:33 -08:00
sabaimran
9c403d24e1 Fix reference to directory in the eval workflow for starting terrarium 2024-12-08 13:03:05 -08:00
sabaimran
7cd2855146 Make attributes optional in the knowledge graph model 2024-12-08 12:23:17 -08:00
sabaimran
2af687d1c5 Allow snippetHighlighted to also be nullable 2024-12-08 11:51:24 -08:00
sabaimran
efa23a8ad8 Update validation requirements for online searches 2024-12-08 11:30:17 -08:00
sabaimran
6940c6379b Add sudo when running installations in order to install relevant packages
add --legacy-peer-deps temporarily to see if it helps mitigate the issue
2024-12-08 11:11:13 -08:00
sabaimran
4c4b7120c6 Use Khoj terrarium fork instead of building from official Cohere repo 2024-12-08 11:06:33 -08:00
sabaimran
a138845fea Merge branch 'master' of github.com:khoj-ai/khoj into features/allow-multi-outputs-in-chat 2024-12-08 10:57:16 -08:00
sabaimran
19832a3ed0 Add note to uncomment line when using the prod image 2024-12-05 18:16:01 -08:00
sabaimran
110c64ba27 Update the desktop instructions 2024-12-05 17:39:16 -08:00
Debanjum
65c5b163c9 Add khoj_lantern svg to web public assets for use by new admin panel 2024-12-05 10:57:18 -08:00
Debanjum
354dc12b3b Style the Admin Panel with a modern theme and Khoj branding (#999)
Overview
- The default django admin panel UI looks pretty dated and didn't
  have any Khoj specific branding
- Used the Unfold Django admin panel theme for a modern look
- Used the Khoj logo and name in Admin panel title, headings, favicons

Details:
All models shown on Admin panel need to inherit from unfold's
ModelAdmin to get styling applied. So

- Make all models on Admin panel inherit from unfold's ModelAdmin
- Subclassed UserAdmin to inherit from unfold's ModelAdmin
- Deregistered the unused Auth Group model from the Admin panel
  We can add it back when it's actually used. Avoid confusion for now
- Explicitly register DjangoJobExecution on admin panel and again make
  it inherit from the unfold.admin.ModelAdmin
2024-12-04 23:53:43 -08:00
Matias Forbord
9cc79c0fb7 Fix broken doc links to query filters from emacs docs page (#1000)
* docs: repair query filters link

* change docs link to be relative
2024-12-04 23:52:27 -08:00
sabaimran
8953ac03ec Rename additional context for llm response to program execution context 2024-12-04 18:43:41 -08:00
sabaimran
886fe4a0c9 Merge branch 'master' of github.com:khoj-ai/khoj into features/allow-multi-outputs-in-chat 2024-12-03 21:37:00 -08:00
sabaimran
df5e34615a Fix processing of images field when construct chat messages 2024-12-03 21:26:55 -08:00
sabaimran
3552032827 Rename additional context to additional_context_for_llm_response 2024-12-03 21:23:15 -08:00
sabaimran
d507894546 Simplify the desktop app
- Make the desktop app mainly a file-syncing client for users who have lots of documents that they need to share with Khoj. This is because the web app provides a fairly robust chat client which can be used by anyone on their computer.
- The chat client in the desktop app had significantly drifted from our current brand / theme, and didn't provide enough value add to update. Later, we will make it easier to install the existing web app as a desktop PWA.
2024-12-02 15:54:05 -08:00
Debanjum
9f7cb335c5 Create Android app for Khoj (#991)
Use bubblewrap to publish the Khoj Progressive Web App (PWA) as a Trusted Web Activity (TWA) to the Android Play Store
2024-12-02 15:45:04 -08:00
Debanjum
ee28d7f125 Fix Android Studio build warnings by using newer gradle, mavenCentral 2024-12-02 12:48:33 -08:00
Debanjum
5d6fb07066 Fix app icons, orientation. Improve name, id, description in webmanifest
- Use bubblewrap generated splash screen, notification icons from
  1200x1200 high res khoj icon in assets.khoj.dev.
- Discard bubblewrap generated launcher icons.
- Fix orientation to respect phone orientation settings; "any" did not.
2024-12-02 12:48:25 -08:00
Debanjum
147c8e9115 Release v3 with high-res splash screen. More details in web, app manifest
- Add 512, 192 Khoj maskable icons to web app manifest for android rendering
- Add id, categories etc suggested by pwabuilder
- Use higher quality icon images for splash screen than what
  bubblewrap creates by default
2024-12-02 11:28:35 -08:00
Debanjum
d333e10e64 Encode request params as utf-8 to fix multibyte char error in khoj.el
Encode api key in header, POST request body and GET query param for
search as utf-8 to avoid the multibyte char in request issue when
making API calls from khoj.el to khoj server.

Resolves #935
2024-12-02 02:00:14 -08:00
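The fix itself is in Emacs Lisp (khoj.el), but the underlying requirement is generic: percent-encode the UTF-8 bytes of multibyte characters before putting them in a header, query string, or request body. Expressed in Python, the server's own language:

```python
from urllib.parse import quote, urlencode

# quote() percent-encodes the UTF-8 bytes of each multibyte character,
# which is what the khoj.el fix ensures before issuing API requests
encoded = quote("日本")           # each char becomes its UTF-8 byte escapes
query = urlencode({"q": "日本"})  # same encoding when building a query string
```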
Debanjum
db29894038 Do not wrap filepath in Path to fix indexing markdown files on Windows (#993)
### Issue
- Paths with / are converted to \\ on Windows by the `Path' constructor.
- The `markdown_to_entries' module was trying to normalize file paths with `Path' for some reason.
  This would store the file paths in DB Entry differently than the file to entries map if Khoj ran on Windows.
  That'd result in a KeyError when trying to look up the entry file path from `file_to_text_map' in the `text_to_entries:update_embeddings()' function.

### Fix
- Removing the unnecessary OS dependent Path normalization in `markdown_to_entries' should keep the file path storage consistent across `file_to_text_map' var, `FileObjectAdaptor', `Entry' DB tables on Windows for Markdown files as well.

This issue will affect users hosting Khoj server on Windows and attempting to index markdown files.

Resolves #984
2024-12-02 01:02:58 -08:00
Debanjum
47c926b0ff Add more typing to org|md_to_entries. Remove redundant f-string wraps
- Add type hints to improve maintainability of stabilized indexing code
- It shouldn't be necessary to wrap string variables in an f-string

This change aims to improve code quality. It should not affect
functionality.
2024-12-01 23:02:52 -08:00
Debanjum
dffdd81345 Do not wrap filepath in Path to fix indexing markdown files on Windows
Issue
- Paths with / are converted to \\ on Windows by the Path constructor.
- The markdown to entries method was doing this for some reason.
  This would store the file paths in DB entry differently than the file
  to entries map. Resulting in a KeyError when trying to look up the
  entry file path from file_to_text_map in the
  text_to_entries:update_embeddings() function.

Fix
- Removing the unnecessary OS dependent Path normalization in
  markdown_to_entries should keep the file path storage consistent
  across file_to_text_map var, FileObjectAdaptor, Entry DB tables on
  Windows for Markdown files as well

This issue would only affect users hosting Khoj server on Windows and
attempting to index markdown files.

Resolves #984
2024-12-01 23:00:31 -08:00
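The failure mode described above can be reproduced deterministically on any OS with `PureWindowsPath`, which mimics how `Path` behaves on Windows (the path string and map name below are illustrative, echoing the `file_to_text_map` mentioned in the commit):

```python
from pathlib import PureWindowsPath

raw = "docs/notes.md"
# On Windows, Path("docs/notes.md") behaves like PureWindowsPath:
# the forward slash is normalized to a backslash...
assert str(PureWindowsPath(raw)) == "docs\\notes.md"

# ...so a map keyed by the raw string no longer matches -> KeyError
file_to_text_map = {raw: "entry text"}
assert str(PureWindowsPath(raw)) not in file_to_text_map

# The fix: keep the raw string untouched everywhere, so keys stay consistent
assert raw in file_to_text_map
```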
sabaimran
c87fce5930 Add a migration to use the new image storage format for past conversations
- Added it to the Django migrations so that it auto-triggers when someone updates their server and starts it up again for the first time. This will require that they update their clients as well in order to view/consume image content.
- Remove server-side references in the code that allowed parsing the text-to-image intent, as it will no longer be necessary given the chat logs will be migrated
2024-12-01 18:35:31 -08:00
Debanjum
9e0a2c7a98 Restrict generated chat title to 200 chars limit allowed for chat slug 2024-11-30 19:12:03 -08:00
Debanjum
8b8e2be82d Only create subscription object when it does not exist for user
This avoids unnecessarily throwing an internal server error when the
user tries to sign up using multiple mechanisms (e.g. first by email, then
by google oauth)
2024-11-30 19:08:34 -08:00
Debanjum
fc6be543bd Improve GPQA eval prompt to ease parsing answers from Khoj responses 2024-11-30 17:21:09 -08:00
sabaimran
00f48dc1e8 If in the new images format, show the response text in obsidian instead of the inferred query 2024-11-30 14:39:51 -08:00
sabaimran
224abd14e0 Only add the image_url to the constructed chat message if it is a url 2024-11-30 14:39:27 -08:00
sabaimran
991577aa17 Allow a None turnId to accommodate historic chat messages 2024-11-30 14:39:08 -08:00
sabaimran
a539761c49 Fix processing of excalidrawdiagram in json response chunking 2024-11-30 12:35:13 -08:00
sabaimran
dc4a9ee3e1 Ensure that the generated assets are maintained in the chat window after streaming is completed. 2024-11-30 12:31:20 -08:00
sabaimran
e3aee50cf3 Fix parsing of generated_asset response 2024-11-29 18:41:53 -08:00
sabaimran
2b32f0e80d Remove commented out code blocks 2024-11-29 18:11:50 -08:00
sabaimran
df855adc98 Update response handling in Obsidian to work with new format 2024-11-29 18:10:47 -08:00
sabaimran
512cf535e0 Collapse train of thought when completed during live stream 2024-11-29 18:10:35 -08:00
sabaimran
a0b00ce4a1 Don't include null attributes when filling in stored conversation metadata
- Prompt adjustments to indicate to LLM what context it has
2024-11-29 18:10:14 -08:00
sabaimran
c5329d76ba Merge branch 'master' of github.com:khoj-ai/khoj into features/allow-multi-outputs-in-chat 2024-11-29 14:12:03 -08:00
sabaimran
46f647d91d Improve image rendering for khoj generated images. Fix typing of stored excalidraw image. 2024-11-29 14:11:48 -08:00
Debanjum
fdf69b7049 Publish second version with new upload key 2024-11-28 22:04:10 -08:00
Debanjum
faf15072b6 Create first version of Khoj Android app from PWA using Bubblewrap 2024-11-28 22:04:10 -08:00
sabaimran
4f6d1211ba Fix additional context type in anthropic chat 2024-11-28 20:16:36 -08:00
sabaimran
6f408948d3 Fix typing of generated_files parameters 2024-11-28 20:15:10 -08:00
sabaimran
439b18c21f Release Khoj version 1.30.10 2024-11-28 19:43:06 -08:00
sabaimran
2dfd163430 Add more explicit run strategies in the runner matrix 2024-11-28 19:31:34 -08:00
sabaimran
80cd902c86 Since linux/amd64 images aren't being created, try setting a custom description on the image
Refer to this GH documentation on working with multi arch images in the container registry:
https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#adding-a-description-to-multi-arch-images
2024-11-28 19:14:06 -08:00
sabaimran
40d8a7a581 Release Khoj version 1.30.9 2024-11-28 18:45:50 -08:00
sabaimran
87aa653c7f Add additional steps in prod.Dockerfile to ensure dependencies are copied over 2024-11-28 18:37:08 -08:00
sabaimran
d91935c880 Initial commit of a functional but not yet elegant prototype for this concept 2024-11-28 17:28:23 -08:00
Debanjum
a552543f4f Use json5 to parse llm generated questions to query docs and web
json5 is more forgiving, handles double quotes, newlines in raw json
string
2024-11-28 14:35:34 -08:00
Debanjum
0a69af4f61 Update to latest ToDesktop runtime 2024-11-28 13:56:14 -08:00
Debanjum
1d0fe141dc Release Khoj version 1.30.8 2024-11-28 13:37:30 -08:00
Debanjum
29e801c381 Add MATH500 dataset to eval
Evaluate simpler MATH500 responses with gemini 1.5 flash

This improves both the speed and cost of running this eval
2024-11-28 12:48:25 -08:00
Debanjum
22aef9bf53 Add GPQA (diamond) dataset to eval 2024-11-28 12:48:25 -08:00
Debanjum
f1190ccf32 Improve parsing complex json strings returned by LLM (#989)
- Improve escaping to load complex json objects
- Fallback to a more forgiving [json5](https://json5.org/) loader if `json.loads` cannot parse complex json str

This should reduce failures to pick research tool and run code by agent
2024-11-28 11:01:39 -08:00
Debanjum
8c120a5139 Fallback to json5 loader if json.loads cannot parse complex json str
JSON5 spec is more flexible, try to load using a fast json5 parser if
the stricter json.loads from the standard library can't load the
raw complex json string into a python dictionary/list
2024-11-26 21:17:00 -08:00
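The fallback described above can be sketched as follows, assuming the `json5` package (Khoj may use a different JSON5 binding; the pattern of strict-first, forgiving-second is what the commit describes):

```python
import json

try:
    import json5  # assumption: any forgiving JSON5 parser exposing loads()
except ImportError:
    json5 = None


def load_complex_json(raw: str):
    """Parse with the fast, strict stdlib loader first; fall back to the
    more forgiving JSON5 spec, which tolerates trailing commas, single
    quotes, and raw newlines in strings that LLMs often emit."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        if json5 is None:
            raise
        return json5.loads(raw)
```

This keeps the common case on the standard library parser and only pays the slower JSON5 path when the raw string is malformed.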
Debanjum
70b7e7c73a Improve load of complex json objects. Use it to pick tool, run code
Gemini doesn't work well when trying to output json objects. Using it
to output raw json strings with complex, multi-line structures
requires more intense clean-up of raw json string for parsing
2024-11-26 17:37:57 -08:00
Debanjum
8cb0db0051 Fix llama-cpp-python install by pytest github workflow
- Use pre-built wheels for torch and llama-cpp-python
- Install and link musl as it's used by llama-cpp-python pre-built
  wheel instead of glibc
- Join Install git and Install Dependencies steps in pytest workflow
  to remove unnecessary steps
2024-11-26 02:04:36 -08:00
Debanjum
29315f44e7 Add assetlinks.json to link android app to app.khoj.dev domain
Add sha cert of android upload, signing keys to open debug, prod apps
as TWA in fullscreen on android phones
2024-11-26 01:57:54 -08:00
Debanjum
a97a45bf20 Align agent personality with recently updated khoj personality
See update to Khoj personality in commit
6eb59464da
2024-11-26 00:06:16 -08:00
Debanjum
e088fcbc7b Build for arm64 on arm64 runner. Parallelize arm64, x64 docker builds
- Building arm64 image on an ubuntu arm64 runner reduces the `yarn build`
  step time by 75%, from 12 mins to 3 mins.
  - This is because no QEMU emulation for arm64 on x86 is required now
- Parallelizing x64 and arm64 platform builds halves build time on top
  - Revert to use standard ubuntu-latest runner as large x64 runner
    doesn't give much more speed improvements

This results in an effective additional 50%-66% reduction in build time
on top of #987.

So a full dockerize workflow run now takes *10 mins* vs previous 35+mins.
This is a total of *72% improvement* in max dockerize run time.

Get additional speed improvements when docker layer cache hit.
2024-11-24 23:18:55 -08:00
Debanjum
5723a3778e Speed up Docker image builds using multi-stage parallel pipelines (#987)
## Objective
Improve build speed and size of khoj docker images

## Changes
### Improve docker image build speeds
  - Decouple web app and server build steps
  - Build the web app and server in parallel
  - Cache docker layers for reuse across dockerize github workflow runs
    - Split Docker build layers for improved cacheability (e.g separate `yarn install` and `yarn build` steps)
### Reduce size of khoj docker images 
  - Use an up-to-date `.dockerignore` to exclude unnecessary directories
  - Do not install CUDA python packages for CPU builds
### Improve web app builds
  - Use consistent mechanism to get fonts for web app
  - Make tailwind extensions production instead of dev dependencies
  - Make next.js create production builds for the web app (via `NODE_ENV=production` env var)
2024-11-24 21:49:46 -08:00
Debanjum
4a5646c8da Cache docker layers, nextjs builds in dockerize github workflow 2024-11-24 21:06:22 -08:00
Debanjum
6a39651ad3 Standardize loading fonts locally across pages on web app 2024-11-24 20:41:15 -08:00
sabaimran
9368699b2c Migrate the pre-commit config 2024-11-24 14:54:26 -08:00
sabaimran
6eb59464da Add additional reinforcement to coax gemini into giving a minimum helpful response 2024-11-24 14:53:53 -08:00
sabaimran
15f062b34a Remove print statement for agent style map 2024-11-24 14:53:53 -08:00
sabaimran
d7e68a2d1b Wait for iplcodata to load before first message
- Fix the Khoj AI ASCII art in the console
- Remove some of the weaker suggested prompts
2024-11-24 14:53:53 -08:00
Debanjum
f51e0f7859 Make Next.js create production builds of web app for Docker images 2024-11-24 13:59:40 -08:00
Debanjum
710e00ad9e Make tailwind extensions prod, instead of dev, deps of web app 2024-11-24 13:59:40 -08:00
Debanjum
4b486ea5f6 Exclude unnecessary directories from final docker builds 2024-11-24 13:59:40 -08:00
Debanjum
78d8ca49ec Skip Nvidia GPU python packages during Server install in Dockerfiles 2024-11-24 13:59:39 -08:00
Debanjum
37887a175a Speed up Docker image builds using multi-stage parallel pipelines
Decouple web app, server builds in parallel to speed up Docker
builds
2024-11-24 12:48:30 -08:00
Debanjum
7c77d65d35 Improve logic to disable telemetry via KHOJ_TELEMETRY_DISABLE env var
The newly added KHOJ_TELEMETRY_DISABLE env var knob to disable
telemetry should override old config mechanism when set
2024-11-24 00:54:16 -08:00
sabaimran
2d683898c2 Release Khoj version 1.30.7 2024-11-23 22:51:10 -08:00
sabaimran
914ff994f7 Fix cost addition to chat_metadata 2024-11-23 22:50:45 -08:00
Debanjum
caaa127dcf Release Khoj version 1.30.6 2024-11-23 21:07:00 -08:00
Debanjum
57b8273002 Fix apt install for musl-dev in prod.Dockerfile 2024-11-23 21:06:09 -08:00
Debanjum
8f966b11ec Release Khoj version 1.30.5 2024-11-23 20:49:05 -08:00
Debanjum
498895a47d Fix libmusl error using pre-built llama-cpp-python wheel in prod Docker 2024-11-23 20:47:41 -08:00
Debanjum
e5b211a743 Release Khoj version 1.30.4 2024-11-23 19:48:21 -08:00
Debanjum
9848d89d03 Try build docker images with github high cpu, ram runner 2024-11-23 19:09:36 -08:00
Debanjum
04bb3d6f15 Fix libmusl error using pre-built llama-cpp-python wheel via Docker
Seems like llama-cpp-python pre-built wheels need libmusl. Otherwise
you run into runtime errors on Khoj startup via Docker.
2024-11-23 18:46:44 -08:00
Debanjum
8dd2122817 Set sample size to 200 for automated eval runs as well 2024-11-23 14:48:38 -08:00
Debanjum
c4ef31d86f Release Khoj version 1.30.3 2024-11-23 14:40:06 -08:00
Debanjum
15ae22bdcf Use pre-built llama-cpp-python wheel in Khoj docker images
Reduces build time and resolves FileNotFoundError 'ninja' during
llama-cpp-python local build.
2024-11-23 14:38:07 -08:00
sabaimran
4ac49ca90f Release Khoj version 1.30.2 2024-11-23 12:00:28 -08:00
sabaimran
eb1b21baaa Add a new sign in modal that is triggered from the login prompt screen, rather than redirecting user to another screen to sign in 2024-11-23 11:55:34 -08:00
Debanjum
5aa5cb1941 Add "New" section with latest updates to Readme 2024-11-23 01:36:50 -08:00
sabaimran
7f5bf35806 Disambiguate renewal_date type. Previously it was used as None, False, and datetime in different places. 2024-11-22 12:06:20 -08:00
sabaimran
5e8c824ecc Improve the experience for finding past conversation
- add a conversation title search filter, and an agents filter, for finding conversations
- in the chat session api, return relevant agent style data
2024-11-22 12:03:01 -08:00
sabaimran
a761865724 Fix handling of customer.subscription.updated event to process new renewal end date 2024-11-22 12:03:01 -08:00
sabaimran
6a054d884b Add quicker/easier filtering on auth 2024-11-22 12:03:01 -08:00
Debanjum
b9a889ab69 Fix Khoj responses when code generated charts in response context
The current fix should improve Khoj responses when charts are in the
response context. It truncates code context before sharing it with
response chat actors.

Previously Khoj would respond that it could not create a chart, but
then include a generated chart in its response in default mode.

The truncate code context logic was added to the research chat actor
for decision making, but it wasn't added to the conversation response
generation chat actors.

When Khoj generated charts with code for its response, the images in
the context would exceed context window limits.

So the truncation logic dropped all past context, including chat
history and the context gathered for the current response.

This would result in the chat response generator 'forgetting' everything
for the current response when code generated images or charts in the
response context.
2024-11-21 14:43:52 -08:00
Debanjum
5475a262d4 Move truncate code context func for reusability across modules
It needs to be used across routers and processors. Its location in the
run_code tool made it hard to use in other chat provider contexts due
to circular dependency issues created by the
send_message_to_model_wrapper func
2024-11-21 14:27:39 -08:00
Debanjum
f434c3fab2 Fix toggling prompt tracer on/off in Khoj via PROMPTRACE_DIR env var
Previous changes to depend on just the PROMPTRACE_DIR env var instead
of KHOJ_DEBUG or verbosity flag was partial/incomplete.

This fix adds all the changes required to only depend on the
PROMPTRACE_DIR env var to enable/disable prompt tracing in Khoj.
2024-11-21 14:06:00 -08:00
Debanjum
4a40cf79c3 Add docs on how to cross-device access self-hosted khoj using tailscale 2024-11-21 11:07:18 -08:00
Debanjum
1f96c13f72 Enable starting khoj uvicorn server with ssl cert file, key for https
Pass your domain cert files via the --sslcert, --sslkey cli args.
For example, to start khoj at https://example.com, you'd run command:

KHOJ_DOMAIN=example.com khoj --sslcert example.com.crt --sslkey
example.com.key --host example.com

This sets up ssl certs directly with khoj without requiring a
reverse proxy like nginx to serve khoj behind https endpoint for
simple setups. More complex setups should, of course, still use a
reverse proxy for efficient request processing
2024-11-21 11:07:18 -08:00
sabaimran
9fea02f20f In telemetry, differentiate create_user google and email 2024-11-21 11:01:37 -08:00
sabaimran
9db885b5f7 Limit access to chat models to futurist users 2024-11-21 07:53:24 -08:00
sabaimran
7a00a07398 Add trailing slash to Ollama url in docs 2024-11-21 07:48:18 -08:00
sabaimran
3519dd76f0 Fix type of excalidraw image response 2024-11-20 19:01:13 -08:00
sabaimran
467de76fc1 Improve the image diagramming prompts and response parsing 2024-11-20 18:59:40 -08:00
Debanjum
50d8405981 Enable khoj to use terrarium code sandbox as tool in eval workflow 2024-11-20 14:19:27 -08:00
Debanjum
2203236e4c Update desktop app dependencies 2024-11-20 13:05:55 -08:00
Debanjum
409204917e Update documentation website dependencies 2024-11-20 13:05:32 -08:00
Debanjum
6f1adcfe67 Track Usage Metrics in Chat API. Track Running Cost, Accuracy in Evals (#985)
- Track, return cost and usage metrics in chat api response
  Track input, output token usage and cost of interactions with 
  openai, anthropic and google chat models for each call to the khoj chat api
- Collect, display and store costs & accuracy of eval run currently in progress
  This provides more insight into eval runs during execution 
  instead of having to wait until the eval run completes.
2024-11-20 12:59:44 -08:00
Debanjum
ffbd0ae3a5 Fix eval github workflow to run on releases, i.e on tags push 2024-11-20 12:57:42 -08:00
Debanjum
ed364fa90e Track running costs & accuracy of eval runs in progress
Collect, display and store running costs & accuracy of eval run.

This provides more insight into eval runs during execution instead of
having to wait until the eval run completes.
2024-11-20 12:40:51 -08:00
Debanjum
bbd24f1e98 Improve dropdown menus on web app setting page with scroll & min-width
- Previously when the settings list became long, the dropdown height would
  overflow the screen height. Now its max height is clamped and it y-scrolls
- Previously the dropdown content would take width of content. This
  would mean the menu could sometimes be less wide than the button. It
  felt strange. Now dropdown content is at least width of parent button
2024-11-20 12:27:13 -08:00
Debanjum
c53c3db96b Track, return cost and usage metrics in chat api response
- Track input, output token usage and cost for interactions
  via chat api with openai, anthropic and google chat models

- Get usage metadata from OpenAI using stream_options
- Handle openai proxies that do not support passing usage in response

- Add new usage, end response  events returned by chat api.
  - This can be optionally consumed by clients at a later point
  - Update streaming clients to mark message as completed after new
    end response event, not after end llm response event
- Ensure usage data from final response generation step is included
  - Pass usage data after llm response complete. This allows gathering
    token usage and cost for the final response generation step across
    streaming and non-streaming modes
2024-11-20 12:17:58 -08:00
Debanjum
80df3bb8c4 Enable prompt tracing only when PROMPTRACE_DIR env var set
Better to decouple prompt tracing from debug mode or verbosity level
and require explicit, independent config to enable prompt tracing
2024-11-20 11:54:02 -08:00
Debanjum
9ab76ccaf1 Skip adding agent to chat metadata when chat unset to avoids null ref 2024-11-19 21:10:23 -08:00
Debanjum
4da0499cd7 Stream responses by openai's o1 model series, as api now supports it
Previously o1 models did not support streaming responses via the API.
Now they seem to.
2024-11-19 21:10:23 -08:00
sabaimran
e5347dac8c Fix base image used for prod in docs 2024-11-19 15:51:27 -08:00
sabaimran
b943069577 Fix button text, and login url in self-hosted auth docs 2024-11-19 15:50:13 -08:00
sabaimran
3b5e6a9f4d Update authentication documentation 2024-11-19 15:45:47 -08:00
Debanjum
7bdc9590dd Fix handling sources, output in chat actor when is automated task
Remove unnecessary ```python prefix removal. It isn't being triggered
in json deserialize path.
2024-11-19 13:49:27 -08:00
Debanjum
0e7d611a80 Remove ```python codeblock prefix from raw json before deserialize 2024-11-19 12:53:52 -08:00
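The fence stripping this commit refers to can be sketched as below. A stdlib-only sketch; the helper name and the exact set of fence prefixes handled are illustrative assumptions:

```python
def strip_code_fence(raw: str) -> str:
    """Remove a leading ```python/```json fence and a trailing ``` from an LLM reply."""
    cleaned = raw.strip()
    for prefix in ("```python", "```json", "```"):
        if cleaned.startswith(prefix):
            cleaned = cleaned[len(prefix):]
            break
    return cleaned.removesuffix("```").strip()
```

Running the stripped string through `json.loads` then succeeds where the fenced original would raise.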
Debanjum
001c13ef43 Upgrade web app package dependencies 2024-11-19 12:53:52 -08:00
sabaimran
4f5c1eeded Update some of the open graph data for the documentation website 2024-11-19 11:14:46 -08:00
sabaimran
5134d49d71 Release Khoj version 1.30.1 2024-11-18 17:30:33 -08:00
sabaimran
8bdd0b26d3 Add a connections clean-up decorator to all scheduled tasks 2024-11-18 17:19:36 -08:00
Debanjum
817601872f Update default offline models enabled 2024-11-18 16:38:17 -08:00
Debanjum
45c623f95c Dedupe, organize chat actor, director tests
- Move Chat actor tests that were previously in chat director tests file
- Dedupe online, offline io selector chat actor tests
2024-11-18 16:10:50 -08:00
Debanjum
2a76c69d0d Run online, offline chat actor, director tests for any supported provider
- Previously online chat actor, director tests only worked with OpenAI.
  This change allows running them for any supported online provider,
  including Google, Anthropic and OpenAI.

- Enable online/offline chat actor, director in two ways:
  1. Explicitly setting KHOJ_TEST_CHAT_PROVIDER environment variable to
     google, anthropic, openai, offline
  2. Implicitly by the first API key found from openai, google or anthropic.

- Default offline chat provider to use Llama 3.1 3B for faster, lower
  compute test runs
2024-11-18 15:11:37 -08:00
Debanjum
653127bf1d Improve data source, output mode selection
- Set output mode to single string. Specify output schema in prompt
  - Both of these should encourage the model to select only one output mode
    instead of encouraging it in the prompt too many times
  - Output schema should also improve schema following in general
- Standardize variable, func name of io selector for readability
- Fix chat actors to test the io selector chat actor
- Make chat actor return sources, output separately for better
  disambiguation, at least during tests, for now
2024-11-18 15:11:37 -08:00
Debanjum
e3fd51d14b Pass user arg to create title from query in new automation flow 2024-11-18 12:58:10 -08:00
Debanjum
9e74de9b4f Improve serializing conversation JSON to print messages on console
- Handle chatml message.content with non-json serializable data like
  WebP image binary data used by Gemini models
2024-11-18 12:57:05 -08:00
sabaimran
3f70d2f685 Add more graceful exception handling when tool selection doesn't work 2024-11-18 09:34:49 -08:00
Debanjum
a2ccf6f59f Fix github workflow to start Khoj, connect to PG and upload results
- Do not trigger tests to run in ci on update to evals
2024-11-18 04:25:15 -08:00
Debanjum
7c0fd71bfd Add GitHub workflow to quiz Khoj across modes and specified evals (#982)
- Evaluate khoj on random 200 questions from each of google frames and openai simpleqa benchmarks across *general*, *default* and *research* modes
- Run eval with Gemini 1.5 Flash as test giver and Gemini 1.5 Pro as test evaluator models
- Trigger eval workflow on release or manually
- Make dataset, khoj mode and sample size configurable when triggered via manual workflow
- Enable Web search, webpage read tools during evaluation
2024-11-18 02:19:30 -08:00
sabaimran
f75085dc7a Release Khoj version 1.30.0 2024-11-17 21:36:22 -08:00
sabaimran
c72813ba67 Merge pull request #981 from rznzippy/bugfix/980/database-connections-leakage
Fix database connections leakage (#980)
2024-11-17 21:01:06 -08:00
sabaimran
7d50c6590d Merge pull request #977 from khoj-ai/features/improve-tool-selection
- JSON extract from LLMs is pretty decent now, so get the input tools and output modes all in one go. It'll help the model think through the full cycle of what it wants to do to handle the request holistically.
- Make slight improvements to tool selection indicators
2024-11-17 20:08:19 -08:00
sabaimran
282f47e0d6 Add Jina documentation to readme for self-hosting 2024-11-17 17:20:28 -08:00
Debanjum
48567fd468 Do not erase partial message when generation stopped via button on web app
Previously, we'd replace the generated message with an error message
when message generation stopped via stop button on chat page of web app.
So the partially generated message (which could be useful) gets lost.

This change just stops generation, while keeping the generated
response so any useful information from the partially generated
message can be retrieved.
2024-11-17 16:29:18 -08:00
Debanjum
285006d6c9 Sync chat models in Khoj with OpenAI proxies (e.g Ollama) on startup
- Allows managing chat models in the OpenAI proxy service like Ollama.
- Removes need to manually add, remove chat models from Khoj Admin Panel
  for these OpenAI compatible API services when enabled.
- Khoj still maintains the chat model configs within Khoj, so they can
  be configured via the Khoj admin panel as usual.
2024-11-17 15:34:36 -08:00
Debanjum
4a7f5d1abe Set API keys in docker-compose.yml to enable web search, scrape tools 2024-11-17 15:34:36 -08:00
Debanjum
d6eece63f4 Use Jina API Key of Jina web scraper if configured in DB
Previously Jina search didn't need an API key. Now that it does, we
should re-use the API key set in the Jina web scraper config, otherwise
fall back to the JINA_API_KEY environment variable, if either is
present.

Resolves #978
2024-11-17 15:34:14 -08:00
sabaimran
6531f24ca0 Further improvements for descriptions to LLM on modes, code, diagram, image. 2024-11-17 13:23:57 -08:00
sabaimran
0eba6ce315 When diagram generation fails, save to conversation log
- Update tool name when choosing tools to execute
2024-11-17 13:23:12 -08:00
sabaimran
7e662a05f8 Merge branch 'master' of github.com:khoj-ai/khoj into features/improve-tool-selection 2024-11-17 12:26:55 -08:00
Ilya Khrustalev
00b1af8f99 Fix database connections leakage (#980) 2024-11-17 19:15:05 +01:00
Debanjum
69ef6829c1 Simplify integrating Ollama, OpenAI proxies with Khoj on first run
- Integrate with Ollama or other openai compatible APIs by simply
  setting the `OPENAI_API_BASE` environment variable in docker-compose etc.
- Update docs on integrating with Ollama, openai proxies on first run
- Auto populate all chat models supported by openai compatible APIs
- Auto set vision enabled for all commercial models

- Minor
  - Add huggingface cache to khoj_models volume. This is where chat
  models and (now) sentence transformer models are stored by default
  - Reduce verbosity of yarn install of web app. Otherwise we hit the docker
  log size limit & stop seeing remaining logs after the web app install
  - Suggest `ollama pull <model_name>` to start it in background
2024-11-17 02:08:20 -08:00
Debanjum
2366fa08b9 Update default vision supported & anthropic chat models on first run
- Update to latest initialize with new claude 3.5 sonnet and haiku models
- Update to set vision enabled for google and anthropic models by
  default. Previously we didn't support but we've supported this for a
  month or two now
2024-11-17 02:08:20 -08:00
Debanjum
23ab258d78 Improve user conversation config details on Admin panel
Show user email and chat model that is associated with the user
conversation config
2024-11-17 02:08:20 -08:00
Debanjum
41d9011a26 Move evaluation script into tests/evals directory
This should give more space for eval scripts, results and readme
2024-11-17 02:08:20 -08:00
Debanjum
d9d5884958 Enable evaluating Khoj on the OpenAI SimpleQA bench using eval script
- Just load the raw csv from OpenAI bucket. Normalize it into FRAMES format
- Improve docstring for frames datasets as well
- Log the load dataset perf timer at info level
2024-11-17 02:08:20 -08:00
Debanjum
eb5bc6d9eb Remove Talc search bench from Khoj eval script 2024-11-17 02:08:20 -08:00
Debanjum
fc45aceecf Delete unused favicon ico in old web app directory 2024-11-17 02:08:20 -08:00
Debanjum
a16fc3ade8 Only add /research prefix when no slash command in message on web app
- Explicitly adding a slash command is a higher priority intent than
  research mode being enabled in the background. Respect that for a
  more intuitive UX flow.

- Explicit slash commands do not currently work in research mode.
  You have to turn research mode off to use other slash commands. This
  is strange and unnecessary, given the intent priority is clear.
2024-11-17 02:08:20 -08:00
sabaimran
a1b4587b34 Remove extract_images flag from PDF loader 2024-11-15 21:46:35 -08:00
sabaimran
15b4cec1e8 Add documentation for how to use the text to image model configs, reduce to Replicate 2024-11-15 15:26:14 -08:00
sabaimran
759873ec44 Add documentation for how to use the text to image model configs 2024-11-15 15:22:06 -08:00
sabaimran
c77dc84a68 Remove output_modes function reference in chat tests 2024-11-15 14:03:07 -08:00
sabaimran
e3f1ea9dee Improve tool, output mode selection process
- JSON extract from LLMs is pretty decent now, so get the input tools and output modes all in one go. It'll help the model think through the full cycle of what it wants to do to handle the request holistically.
- Make slight improvements to tool selection indicators
2024-11-15 13:53:53 -08:00
sabaimran
c1a5b32ebf Do not start server when importing the main.py file, unless gunicorn
- Add more graceful shutdown when closing bg scheduler thread
2024-11-14 17:36:51 -08:00
sabaimran
be3ee5ec9f Add cool new suggestion cards for math, diagramming 2024-11-14 17:36:51 -08:00
Debanjum
9fc44f1a7f Enable evaluating Khoj on the Talc Search Bench using the eval script
- Just load the raw jsonl from Github and normalize it into FRAMES format
- Color printed accuracy in eval script to blue for readability
2024-11-13 22:50:14 -08:00
Debanjum
8e009f48ce Show tool call error in next iteration. Allow rerun if model requests.
Previously errors would get eaten up but the model wouldn't see
anything. And the model wouldn't be allowed to re-run the same query-tool
combination in the next iteration.

This update should give it insight into why it didn't get a result. So
it can make an informed (hopefully better) decision on what to do next.
And re-run the previous query if appropriate.
2024-11-13 22:50:14 -08:00
Debanjum
604da90fa8 Wrap try/catch around online search in research mode like other tools
Previously when the call to the online search API etc. failed, it'd error
out of responding to the query in research mode. Khoj should skip tool use
that iteration but continue trying to respond.
2024-11-13 16:46:09 -08:00
Debanjum
8851b5f78a Standardize chat message truncation and serialization before print
Previously chatml messages were just strings, now they can be list of
strings or list of dicts as well.

- Use JSON serialization to manage their variations and truncate them
  before printing for context.
- Put logic in single function for use across chat models
2024-11-13 16:30:17 -08:00
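The uniform serialize-then-truncate approach this commit describes can be sketched as below. The helper name and truncation length are illustrative assumptions:

```python
import json

def truncate_for_log(content, max_chars: int = 120) -> str:
    """Serialize str / list[str] / list[dict] chatml content uniformly, then truncate."""
    # json serialization handles the variations in message content shape;
    # default=str covers non-serializable values like binary image data
    text = content if isinstance(content, str) else json.dumps(content, default=str)
    return text if len(text) <= max_chars else text[:max_chars] + "..."
```

Keeping this in one function lets every chat model integration share the same truncation behavior before printing messages for context.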
Debanjum
f4e37209a2 Improve error handling, display and configurability of eval script
- Default to evaluation decision of None when either agent or
  evaluator llm fails. This fixes accuracy calculations on errors
- Fix showing color for decision True
- Enable arg flags to specify output results file paths
2024-11-13 14:32:22 -08:00
Debanjum
15b0cfa3dd Improve structured message truncation in logger
Previously chatml messages were just strings.
Since gemini, anthropic models always have messages as list of
strings, truncate those strings instead of the list of message content
2024-11-13 14:32:22 -08:00
Debanjum
153ae8bea9 Cut binary, long output files from code result for context efficiency
Removing binary data and truncating large data in output files
generated by code runs should improve speed and cost of research mode
runs with large or binary output files.

Previously binary data in code results was passed around in iteration
context during research mode. This made the context inefficient
because models have limited efficiency and reasoning capabilities over
b64 encoded image (and other binary) data and would hit context limits
leading to unnecessary truncation of other useful context

Also remove image data when logging output of code execution
2024-11-13 14:32:22 -08:00
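The trimming this commit describes can be sketched as below. The field names (`filename`, `b64_data`) and size cutoff are illustrative assumptions, not Khoj's actual schema:

```python
def compact_code_results(output_files: list[dict], max_chars: int = 10_000) -> list[dict]:
    """Drop binary image payloads and truncate oversized text outputs from code runs."""
    compacted = []
    for file in output_files:
        data = file.get("b64_data", "")
        if file.get("filename", "").lower().endswith((".png", ".jpg", ".jpeg", ".webp")):
            # keep the filename for reference, drop the b64 image blob
            file = {**file, "b64_data": ""}
        elif len(data) > max_chars:
            file = {**file, "b64_data": data[:max_chars]}
        compacted.append(file)
    return compacted
```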
sabaimran
de34cc3987 Remove og image url with khoj documentation 2024-11-13 10:23:02 -08:00
sabaimran
4a1b1e8b9a Add support for interrupting messages after they've been sent. 2024-11-12 22:22:45 -08:00
sabaimran
d607ad7a27 Release Khoj version 1.29.1 2024-11-12 10:32:56 -08:00
sabaimran
8ec1764e42 Handle size calculation more gracefully for converted documents, depending on type 2024-11-12 02:00:29 -08:00
sabaimran
b6714c202f Increase the title character limit to 500 for conversations 2024-11-12 01:51:19 -08:00
sabaimran
f05e64cf8c Release Khoj version 1.29.0 2024-11-11 21:46:25 -08:00
sabaimran
47d3c8c235 Remove email query parameter from subscription patch api 2024-11-11 21:39:49 -08:00
sabaimran
d7027109a5 Add null handling for response output_files in code output 2024-11-11 21:14:56 -08:00
sabaimran
d68243a3fb Revert clean_json logic temporarily. Eventually, we should do better validation here to extract markdown-formatted json. 2024-11-11 21:05:17 -08:00
sabaimran
1cab6c081f Add better error handling for diagram output, and fix chat history construct
- Make the `clean_json` method more robust as well
2024-11-11 20:44:19 -08:00
sabaimran
7bd2f83f97 Wrap test in suggestionCard 2024-11-11 20:12:46 -08:00
Debanjum
48862a8400 Enable Passing External Documents for Analysis in Code Sandbox (#960)
- Allow passing user files as input into code sandbox for analysis
- Update prompt to give more example of complex, multi-line code
- Simplify logic for model. Run one program at a time, 
  instead of allowing model to run multiple programs in parallel
- Show code-generated charts and docs in the Reference pane of the web app and make them downloadable
2024-11-11 19:37:17 -08:00
Debanjum
5078ac0ce2 Await on conversation save when generate conversation title via API 2024-11-11 19:17:39 -08:00
Debanjum
e1d0015248 Allow disabling Khoj telemetry via KHOJ_TELEMETRY_DISABLE env var 2024-11-11 19:17:39 -08:00
Debanjum
a52500d289 Show generated code artifacts before notes and online references 2024-11-11 18:00:22 -08:00
Debanjum
218eed83cd Show output file not code on hover. Remove reference card title border 2024-11-11 18:00:22 -08:00
Debanjum
b970cfd4b3 Align styling of reference panel card across code, docs, web results
- Add a border below heading
- Show code snippet in pre block
- Overflow-x when reference side panel open to allow seeing whole text
  via x-scroll
- Align header, body position of reference cards with each other
- Only show filename in doc reference cards at message bottom.
  Show full file path in hover and reference side panel
2024-11-11 18:00:22 -08:00
Debanjum
8e9f4262a9 Render code output files with code references in reference section
- Improve rendering code reference with better icons, smaller text and
  different line clamps for better visibility
- Show code output files as sub card of code card in reference section
- Allow downloading files generated by code instead of rendering it in
  chat message directly
- Show executed code before online references in reference panel
2024-11-11 18:00:22 -08:00
Debanjum
92c1efe6ee Fixes to render & save code context with non text based output modes
- Fix to render code generated chart with images, excalidraw diagrams
- Fix to save code context to chat history in image, diagram output modes
- Fix bug in image markdown being wrapped twice in markdown syntax
- Render newline in code references shown on chat page of web app
  Previously newlines weren't getting rendered. This made the code
  executed by Khoj hard to read in references. This changes fixes that.

  `dangerouslySetInnerHTML` usage is justified as the rendered code
  snippet is sanitized by DOMPurify before rendering.
2024-11-11 18:00:22 -08:00
Debanjum
af0215765c Decode code text output files from b64 to str to ease client processing 2024-11-11 18:00:22 -08:00
Debanjum
7b39f2014a Enable analysing user documents in code sandbox and other improvements
- Run one program at a time, instead of allowing model to pass
  multiple programs to run in parallel to simplify logic for model
- Update prompt to give more example of complex, multi-line code
- Allow passing user files as input into code sandbox for analysis

- Log code execution timer at info level to evaluate execution latencies
  in production

- Type the generated code for easier processing by caller functions
2024-11-11 17:59:37 -08:00
sabaimran
dc109559d4 Research mode gray when off, colored when on 2024-11-11 16:35:07 -08:00
sabaimran
cdda9c2e73 Improve text wrapping for attached files and preview context
For the research mode toggle, make it not fill when it's off
2024-11-11 13:32:10 -08:00
sabaimran
dd36303bb7 Fix sending file attachments in save_to_conversation method
- When files attached but upload fails, don't update the state variables
- Make removing null characters in pdf extraction more space efficient
2024-11-11 12:53:06 -08:00
Debanjum
ba2471dc02 Do not CRUD on entries, files & conversations in DB for null user (#958)
Increase defense-in-depth by reducing paths to create, read, update or delete entries, files and conversations in DB when user is unset.
2024-11-11 12:47:22 -08:00
Debanjum
536fe994be Remove unused db adapter methods, like for fact checker data store 2024-11-11 12:22:34 -08:00
Debanjum
10bca6fa8f Convert required user param check into decorator. Use with more adapters 2024-11-11 12:22:32 -08:00
Debanjum
ff5c10c221 Do not CRUD on entries, files & conversations in DB for null user
Increase defense-in-depth by reducing paths to create, read, update or
delete entries, files and conversations in DB when user is unset.
2024-11-11 12:20:07 -08:00
sabaimran
27fa39353e Make custom agent creation flow available to everyone
- For private agents, add guardrails to protect against any misuse or violation of the terms of service.
2024-11-11 11:54:59 -08:00
sabaimran
b563f46a2e Merge pull request #957 from khoj-ai/features/include-full-file-in-convo-with-filter
Support including file attachments in the chat message

Now that models have much larger context windows, we can reasonably include full texts of certain files in the messages. Do this when an explicit file filter is set in a conversation. Do so in a separate user message in order to mitigate any confusion in the operation.

Pipe the relevant attached_files context through all methods calling into models.

This breaks certain prior behaviors. We will no longer automatically be processing/generating embeddings on the backend and adding documents to the "brain". You'll have to go to settings and go through the upload documents flow there in order to add docs to the brain (i.e., have search include them during question / response).
2024-11-11 11:34:42 -08:00
sabaimran
2bb2ff27a4 Rename attached_files to query_files. Update relevant backend and client-side code. 2024-11-11 11:21:26 -08:00
sabaimran
47937d5148 Merge branch 'features/include-full-file-in-convo-with-filter' of github.com:khoj-ai/khoj into features/include-full-file-in-convo-with-filter 2024-11-11 09:34:08 -08:00
sabaimran
ae4eb96d48 Consolidate file name to icon mapping 2024-11-11 09:34:04 -08:00
Debanjum
7954f39633 Use accept param to file input to indicate supported file types in web app
Remove unused total size calculations in chat input
2024-11-11 04:06:17 -08:00
Debanjum
4223b355dc Use python stdlib methods to write pdf, docx to temp files for loaders
Use python standard method tempfile.NamedTemporaryFile to write,
delete temporary files safely.
2024-11-11 03:24:50 -08:00
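A rough sketch of the tempfile.NamedTemporaryFile approach described above (helper name is illustrative, not Khoj's actual loader code):

```python
import tempfile

def write_to_temp_file(raw_bytes: bytes, suffix: str = ".pdf") -> str:
    """Safely write binary file content to a named temporary file.

    delete=False keeps the file on disk so a document loader can read it;
    the caller is responsible for removing it after processing.
    """
    with tempfile.NamedTemporaryFile(mode="wb", suffix=suffix, delete=False) as f:
        f.write(raw_bytes)
        return f.name
```

The stdlib handles unique naming, so no manual random suffixes are needed to avoid filename clashes.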
Debanjum
fd15fc1e59 Move construct chat history back to its original position in file
Keeping the function where it originally was allows tracking diffs and change
history more easily
2024-11-11 03:24:50 -08:00
Debanjum
35d6c792e4 Show snippet of truncated messages in debug logs to avoid log flooding 2024-11-11 02:30:38 -08:00
sabaimran
8805e731fd Merge branch 'master' of github.com:khoj-ai/khoj into features/include-full-file-in-convo-with-filter 2024-11-10 19:24:11 -08:00
sabaimran
a5e2b9e745 Exit early when running an automation if the conversation for the automation does not exist. 2024-11-10 19:22:21 -08:00
sabaimran
55200be4fa Apply agent color fill to the toggle both in off and on states 2024-11-10 19:16:43 -08:00
Debanjum
7468f6a6ed Deduplicate online references returned by chat API to clients
This will ensure only unique online references are shown in all
clients.

The duplication issue was exacerbated in research mode as even with
different online search queries, you can get previously seen results.

This change does a global deduplication across all online results seen
across research iterations before returning them in client response.
2024-11-10 16:10:32 -08:00
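A minimal sketch of such global deduplication, keyed on each result's link (names are illustrative, not the actual Khoj helpers):

```python
def deduplicate_online_references(results: list[dict]) -> list[dict]:
    """Keep only the first occurrence of each link across all online
    results seen over research iterations."""
    seen_links: set[str] = set()
    deduped = []
    for result in results:
        link = result.get("link")
        if link in seen_links:
            continue  # previously seen result, skip it
        if link:
            seen_links.add(link)
        deduped.append(result)
    return deduped
```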
Debanjum
137687ee49 Deduplicate searches in normal mode & across research iterations
- Deduplicate online, doc search queries across research iterations.
  This avoids running previously run online, doc searches again and
  dedupes online, doc context seen by model to generate response.
- Deduplicate online search queries generated by chat model for each
  user query.
- Do not pass online, docs, code context separately when generate
  response in research mode. These are already collected in the meta
  research passed with the user query
- Improve formatting of context passed to generate research response
  - Use xml tags to delimit context. Pass per iteration queries in each
    iteration result
  - Put user query before meta research results in user message passed
    for generating response

These deduplications will improve speed, cost & quality of research mode
2024-11-10 16:10:32 -08:00
Debanjum
306f7a2132 Show error in picking next tool to researcher llm in next iteration
Previously the whole research mode response would fail if the pick
next tool call to chat model failed. Now instead of it completely
failing, the researcher actor is told to try again in next iteration.

This allows for a more graceful degradation in answering a research
question even if a (few?) calls to the chat model fail.
2024-11-10 14:52:02 -08:00
Debanjum
eb492f3025 Only keep webpage content requested, even if Jina API gets more data
Jina search API returns content of all webpages in search results.
Previously code wouldn't remove content beyond max_webpages_to_read
limit set. Now, webpage content in organic results is explicitly
removed beyond the requested max_webpages_to_read limit.

This should align behavior of online results from Jina with other
online search providers. And restrict llm context to a reasonable size
when using Jina for online search.
2024-11-10 14:51:16 -08:00
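The trimming described above could look something like this sketch (hypothetical helper and result shape, assuming webpage text lives under a "content" key):

```python
def trim_webpage_content(organic_results: list[dict], max_webpages_to_read: int) -> list[dict]:
    """Drop full webpage content from organic search results beyond the
    requested limit, keeping title/link metadata for the LLM context."""
    for result in organic_results[max_webpages_to_read:]:
        result.pop("content", None)  # remove webpage text past the limit
    return organic_results
```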
Debanjum
8ef7892c5e Exclude non-dictionary doc context from chat history sent to chat models
This fixes chat with old chat sessions. Fixes issue where old WhatsApp
users couldn't chat with Khoj because chat history doc context was
stored as a list earlier
2024-11-10 14:51:16 -08:00
Debanjum
d892ab3174 Fix handling of command rate limit and improve rate limit messages
Command rate limit wouldn't be shown to user as server wouldn't be
able to handle HTTP exception in the middle of streaming.

Catch the exception and render it as an LLM response message instead,
giving the user visibility into command rate limiting on the client

Log rate limit messages for all rate limit events on server as info
messages

Convert exception messages into first person responses by Khoj to
avoid breaking the fourth wall and provide more details on what
happened and possible ways to resolve them.
2024-11-10 14:51:16 -08:00
Debanjum
80ee35b9b1 Wrap messages in web, obsidian UI to stay within screen when long links
Wrap long links etc. in chat messages and train of thought lists on
web app and obsidian plugin by breaking them into newlines by word
2024-11-10 14:49:51 -08:00
Debanjum
f967bdf702 Show correct example index being currently processed in frames eval
Previously the batch start index wasn't being passed so all batches
started in parallel were showing the same processing example index

This change doesn't impact the evaluation itself, just the index shown
of the example currently being evaluated
2024-11-10 14:49:51 -08:00
Debanjum
84a8088c2b Only evaluate non-empty responses to reduce eval script latency, cost
Empty responses by Khoj will always be an incorrect response, so no
need to make call to an evaluator agent to check that
2024-11-10 14:49:51 -08:00
sabaimran
170d959feb Handle offline messages differently, as they don't respond well to the structured messages 2024-11-09 19:52:46 -08:00
sabaimran
2c543bedd7 Add typing to the constructed messages listed 2024-11-09 19:40:27 -08:00
sabaimran
79b15e4594 Only add images when they're present and vision enabled 2024-11-09 19:37:30 -08:00
sabaimran
bd55028115 Fix randint import from random when creating filenames for tmp 2024-11-09 19:17:18 -08:00
sabaimran
92b6b3ef7b Add attached files to latest structured message in chat ml format 2024-11-09 19:17:00 -08:00
sabaimran
835fa80a4b Allow docx conversion in the chatFunction.ts 2024-11-09 18:51:00 -08:00
sabaimran
459318be13 Add random suffixes to decrease clash probability when writing tmp files to disk 2024-11-09 18:46:34 -08:00
sabaimran
dbf0c26247 Remove _summary_ description in function descriptions 2024-11-09 18:42:42 -08:00
sabaimran
e5ac076fc4 Move construct_chat_history method back to conversation.utils.py 2024-11-09 18:27:46 -08:00
sabaimran
bc95a99fb4 Make tracer the last input parameter for all the relevant chat helper methods 2024-11-09 18:22:46 -08:00
sabaimran
ceb29eae74 Add phone number verification and remove telemetry update call from place where authentication middleware isn't yet installed (in the middleware itself). 2024-11-09 12:25:36 -08:00
sabaimran
3badb27744 Remove stored uploaded files after they're processed. 2024-11-08 23:28:02 -08:00
sabaimran
78630603f4 Delete the fact checker application 2024-11-08 17:27:42 -08:00
sabaimran
807687a0ac Automatically generate titles for conversations from history 2024-11-08 16:02:34 -08:00
sabaimran
7159b0b735 Enforce limits on file size when converting to text 2024-11-08 15:27:28 -08:00
sabaimran
4695174149 Add support for file preview in the chat input area (before message sent) 2024-11-08 15:12:48 -08:00
sabaimran
ad46b0e718 Label pages when extract text from pdf, docs content. Fix scroll area in doc preview. 2024-11-08 14:53:20 -08:00
sabaimran
ee062d1c48 Fix parsing for PDFs via content indexing API 2024-11-07 18:17:29 -08:00
sabaimran
623a97a9ee Merge branch 'master' of github.com:khoj-ai/khoj into features/include-full-file-in-convo-with-filter 2024-11-07 17:18:23 -08:00
sabaimran
33498d876b Simplify the share chat page. Don't need it to maintain its own conversation history
- When chatting on a shared page, fork and redirect to a new conversation page
2024-11-07 17:14:11 -08:00
sabaimran
4b8be55958 Convert UUID to string when forking a conversation 2024-11-07 17:13:04 -08:00
sabaimran
9bbe27fe36 Set default value of attached files to empty list 2024-11-07 17:12:45 -08:00
sabaimran
3a51996f64 Process attached files in the chat history and add them to the chat message 2024-11-07 16:06:58 -08:00
sabaimran
a89160e2f7 Add support for converting an attached doc and chatting with it
- Document is first converted in the chatinputarea, then sent to the chat component. From there, it's sent in the chat API body and then processed by the backend
- We couldn't directly use an UploadFile type in the backend API because we'd have to convert the api type to a multipart form. This would require other client side migrations without uniform benefit, which is why we do it in this two-phase process. This also gives us capacity to repurpose the more generic interface down the road.
2024-11-07 16:06:37 -08:00
sabaimran
e521853895 Remove unnecessary console.log statements 2024-11-07 16:03:31 -08:00
sabaimran
92c3b9c502 Add function to get an icon from a file type 2024-11-07 16:02:53 -08:00
sabaimran
140c67f6b5 Remove focus ring from the text area component 2024-11-07 16:02:02 -08:00
sabaimran
b8ed98530f Accept attached files in the chat API
- weave through all subsequent subcalls to models, where relevant, and save to conversation log
2024-11-07 16:01:48 -08:00
sabaimran
ecc81e06a7 Add separate methods for docx and pdf files to just convert files to raw text, before further processing 2024-11-07 16:01:08 -08:00
sabaimran
394035136d Add an api that gets a document, and converts it to just text 2024-11-07 16:00:10 -08:00
sabaimran
3b1e8462cd Include attach files in calls to extract questions 2024-11-07 15:59:15 -08:00
sabaimran
de73cbc610 Add support for relaying attached files through backend calls to models 2024-11-07 15:58:52 -08:00
Debanjum
4cad96ded6 Add Script to Evaluate Khoj on Google's FRAMES benchmark (#955)
- Why
We need better, automated evals to measure performance shifts of Khoj
across prompt, model and capability changes.

Google's FRAMES benchmark evaluates multi-step retrieval and reasoning
capabilities of AI agents. It's a good starter benchmark to evaluate Khoj.

- Details
This PR adds an eval script to evaluate Khoj responses on the FRAMES
benchmark prompts against the ground truth provided by it.

Script allows configuring sample size, batch size, sampling queries from the
eval dataset.

Gemini is used as an LLM Judge to auto grade Khoj responses vs ground truth 
data from the benchmark.
2024-11-06 17:52:01 -08:00
Debanjum
8679294bed Remove need to set server chat settings from use openai proxies docs
This was previously required, but now it's only useful for more
advanced settings, not typical for self-hosting users.

With recent updates, the user's selected chat model is used for both
Khoj's train of thought and response. This makes it easy to
switch your preferred chat model directly from the user settings
page and not have to update this in the admin panel as well.

Reflect these code changes in the docs, by removing the unnecessary
step for self-hosted users to create a server chat setting when using
an OpenAI proxy service like Ollama, LiteLLM etc.
2024-11-05 17:10:53 -08:00
Debanjum
05a93fcbed v-align attach, send buttons with chat input text area on web app
Otherwise, those buttons look off-center when images are attached to
the chat input area
2024-11-05 17:10:53 -08:00
sabaimran
a0480d5f6c use fill weight for the toggle right (enabled state) for research mode 2024-11-04 22:01:09 -08:00
sabaimran
dc26da0a12 Add uploaded files in the conversation file filter for a new convo 2024-11-04 22:00:47 -08:00
Debanjum
b51ee644aa Fix escaping filename when normalizing in org node parser 2024-11-04 20:24:57 -08:00
Debanjum
5724d16a6f Fix passing images to anthropic chat models to extract questions 2024-11-04 20:24:57 -08:00
sabaimran
cf0bcec0e7 Revert SKIP_TESTS flag in offline chat director tests 2024-11-04 19:06:54 -08:00
sabaimran
1f372bf2b1 Update file summarization unit tests now that multiple files are allowed 2024-11-04 17:45:54 -08:00
sabaimran
7543360210 Merge branch 'master' of github.com:khoj-ai/khoj into features/include-full-file-in-convo-with-filter 2024-11-04 16:55:48 -08:00
sabaimran
b6145df3be Handle file retrieval when agent is None 2024-11-04 16:55:22 -08:00
sabaimran
3dc9139cee Add additional handling for when file_object comes back empty 2024-11-04 16:53:07 -08:00
sabaimran
a27b8d3e54 Remove summarize condition for only 1 file filter 2024-11-04 16:51:37 -08:00
sabaimran
362bdebd02 Add methods for reading full files by name and including context
Now that models have much larger context windows, we can reasonably include full texts of certain files in the messages. Do this when an explicit file filter is set in a conversation. Do so in a separate user message in order to mitigate any confusion in the operation.

Pipe the relevant attached_files context through all methods calling into models.

We'll want to limit the file sizes for which this is used and provide more helpful UI indicators that this sort of behavior is taking place.
2024-11-04 16:37:13 -08:00
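The message construction described above might look roughly like this (a sketch with hypothetical names, not the actual implementation):

```python
def build_chat_messages(query: str, attached_files: dict[str, str]) -> list[dict]:
    """Pass full file texts in a separate user message before the query to
    mitigate confusion between user instructions and document content."""
    messages = []
    if attached_files:
        context = "\n\n".join(
            f"# File: {name}\n{text}" for name, text in attached_files.items()
        )
        messages.append({"role": "user", "content": context})
    messages.append({"role": "user", "content": query})
    return messages
```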
sabaimran
e3ca52b7cb Use .get() to get text accompanying image url, instead of subindexing 2024-11-04 16:09:16 -08:00
sabaimran
1e89baca7b Deprecate the UserSearchModelConfig and remove all references
- The server has moved to a model of standardization for the embeddings generation workflow. Remove references to the support for differentiated models.
- The migration script for a new model needs to be updated to accommodate full regeneration.
2024-11-04 12:24:41 -08:00
Debanjum
1ccbf72752 Use logger instead of print to track eval 2024-11-04 00:40:26 -08:00
sabaimran
99c1d2831a Release Khoj version 1.28.3 2024-11-02 12:23:11 -07:00
sabaimran
075b4ecf15 Call subscription_to_state with sync_to_async wrapper when getting user subscription state
- This is needed in case the renewal_date is not set and we need to reset it for the user
2024-11-02 12:22:35 -07:00
sabaimran
ec44cbe1e7 Release Khoj version 1.28.2 2024-11-02 07:53:51 -07:00
Debanjum
791eb205f6 Run prompt batches in parallel for faster eval runs 2024-11-02 04:58:03 -07:00
Debanjum
96904e0769 Add script to evaluate khoj on Google's FRAMES benchmark
Google's FRAMES benchmark evaluates multi-step retrieval and reasoning
capabilities of an agent.

The script uses Gemini as an LLM Judge to evaluate Khoj responses to
the FRAMES benchmark prompts against the ground truth provided by it.
2024-11-02 04:57:42 -07:00
Debanjum
31b5fde163 Only enable prompt tracer if git python is installed 2024-11-02 02:07:02 -07:00
sabaimran
5b18dc96e0 Release Khoj version 1.28.1 2024-11-01 22:51:51 -07:00
sabaimran
8d1b1bc78e Move the git python dependency into top level dependencies 2024-11-01 22:51:00 -07:00
Debanjum
e85dd59295 Release Khoj version 1.28.0 2024-11-01 19:06:59 -07:00
Debanjum
1f79a10541 Fix link to code execution feature in docs 2024-11-01 18:22:21 -07:00
Debanjum
cff8e02b60 Research Mode [Part 2]: Improve Prompts, Edit Chat Messages. Set LLM Seed for Reproducibility (#954)
- Improve chat actors and their prompts for research mode.
- Add documentation to enable the code tool when self-hosting Khoj
- Edit Chat Messages
  - Store Turn Id in each chat message. 
  - Expose API to delete chat message.
  - Expose delete chat message button to turn delete chat message from web app
- Set LLM Generation Seed for Reproducible Debugging and Testing
  - Setting seed for LLM generation is supported by Llama.cpp and OpenAI models. 
    This can (somewhat) restrain LLM output
  - Getting fixed responses for fixed inputs helps test, debug longer reasoning chains like used in advanced reasoning
2024-11-01 18:16:42 -07:00
Debanjum
14e453039d Add prompt tracing, agent personality to infer webpage urls chat actor 2024-11-01 18:12:50 -07:00
Debanjum
ab321dc518 Expect query before tool in response to give think space in research prompt 2024-11-01 17:51:41 -07:00
Debanjum
1a83bbcc94 Clean API chat router. Move FeedbackData response type to router helper 2024-11-01 17:51:41 -07:00
sabaimran
e6eb87bbb5 Merge branch 'improve-debug-reasoning-and-other-misc-fixes' of github.com:khoj-ai/khoj into improve-debug-reasoning-and-other-misc-fixes 2024-11-01 16:48:39 -07:00
sabaimran
a213b593e8 Limit the number of urls the webscraper can extract for scraping 2024-11-01 16:48:36 -07:00
sabaimran
327fcb8f62 create defiltered query after conversation command is extracted 2024-11-01 16:48:03 -07:00
sabaimran
b79a9ec36d Clarify description of the code evaluation environment: not for document creation 2024-11-01 16:47:27 -07:00
Debanjum
9c7b36dc69 Use standard per minute rate limits across user types 2024-11-01 16:16:06 -07:00
Debanjum
ac21b10dd5 Simplify logic to get default search model. Remove unused import 2024-11-01 15:14:00 -07:00
sabaimran
2b35790165 Merge branch 'master' of github.com:khoj-ai/khoj into improve-debug-reasoning-and-other-misc-fixes 2024-11-01 14:51:26 -07:00
Debanjum
22f3ed3f5d Research Mode: Give Khoj the ability to perform more advanced reasoning (#952)
## Overview
Khoj can now go into research mode and use a python code interpreter. These are experimental features that are being released early for feedback and testing.

- Research mode allows Khoj to dynamically select the tools it needs to best answer the question. It is also allowed more iterations to get to a satisfactory answer. Its more dynamic train of thought is shown to improve visibility into its thinking.
- Adding ability for Khoj to use a python code interpreter is an adjacent capability. It can help Khoj do some data analysis and generate charts for you. A sandboxed python to run code is provided using [cohere-terrarium](https://github.com/cohere-ai/cohere-terrarium?tab=readme-ov-file), [pyodide](https://pyodide.org/).

## Analysis
Research mode (significantly?) improves Khoj's information retrieval for more complex queries requiring multi-step lookups but takes longer to run. It can research for longer, requiring less back-n-forth with the user to find an answer.

Research mode gives most gains when used with more advanced chat models (like o1, 4o, new claude sonnet and gemini-pro-002). Smaller models improve their response quality but tend to get into repetitive loops more often. 

## Next Steps
- Get community feedback on research mode. What works, what fails, what is confusing, what'd be cool to have.
- Tune Khoj's capabilities for longer autonomous runs and to generalize across a larger range of model sizes

## Miscellaneous Improvements
- Khoj's train of thought is saved and shown for all messages, not just the latest one
- Render charts generated by Khoj and code running using the code tool on the web app
- Align chat input color to currently selected agent color
2024-11-01 14:46:29 -07:00
sabaimran
baa939f4ce When running code, strip any code delimiters. Disable application json type specification in Gemini request. 2024-11-01 13:47:39 -07:00
sabaimran
8fd2fe162f Determine if research mode is enabled by checking the conversation commands and 'linting' them in the selection phase 2024-11-01 13:12:34 -07:00
sabaimran
cead1598b9 Don't reset research mode after completing research execution 2024-11-01 13:00:11 -07:00
Debanjum
c1c779a7ef Do not yaml format raw code results in context for LLM. It's confusing 2024-11-01 12:45:26 -07:00
sabaimran
b3dad1f393 Standardize rate limits to 1/6 ratio 2024-11-01 12:21:09 -07:00
sabaimran
23a49b6b95 Add documentation for python code execution capability 2024-11-01 12:14:33 -07:00
Debanjum
cd75151431 Do not allow auto selecting research mode as tool for now.
You are required to manually turn it on. This takes longer and
should be a high intent activity initiated by the user
2024-11-01 12:07:52 -07:00
Debanjum
0b0cfb35e6 Simplify in research mode check in api_chat.
- Dedent code for readability
- Use better name for in research mode check
- Continue to remove inferred summarize command when multiple files in
  file filter even when not in research mode
- Continue to show select information source train of thought.
  It was removed by mistake earlier
2024-11-01 12:07:08 -07:00
sabaimran
ffa7f95559 Add template for a code sandbox to the docker-compose configuration 2024-11-01 11:50:58 -07:00
Debanjum
73750ef286 Merge branch 'master' into features/advanced-reasoning 2024-11-01 11:42:01 -07:00
sabaimran
1fc280db35 Handle case where infer_webpage_url returns no valid urls 2024-11-01 11:41:32 -07:00
Debanjum
1c920273dd Add Prompt Tracer to Visualize, Analyze and Debug Khoj's Train of Thought (#951)
## Overview
Use git to capture prompt traces of khoj's train of thought. View, analyze and debug them using your favorite git client (e.g vscode, magit).

- Each commit captures an interaction with an LLM
  The commit writes the query, response and system message each to a separate file in the repo.
  The commit message captures the chat model, Khoj version and other metadata
- Each conversation turn can have multiple interactions with an LLM (e.g Khoj's train of thought)
- Each new conversation turn forks from and merges back into its conversation branch
- Each new conversation branches from the user branch
- Each new user branches from root commit on the main branch

## Usage
1. Set `KHOJ_DEBUG=true` or start khoj in very verbose mode with `khoj -vv` to turn on prompt tracing
2. Chat with Khoj as usual 
3. Open the promptrace git repo to view the generated prompt traces using your favorite git porcelain. 
   The Khoj prompt trace git repo is created at `/tmp/khoj_promptrace` by default. You can configure the prompt trace directory by setting the `PROMPTRACE_DIR` environment variable.

## Implementation
- Add utility functions to capture prompt traces using git (via `gitpython`)
- Make each model provider in Khoj commit their LLM interactions with promptrace
- Weave chat metadata from chat API through all chat actors and commit it to the prompt trace
2024-11-01 11:33:54 -07:00
sabaimran
33d36ee58c Add experimental notice to research mode tooltip 2024-11-01 11:00:27 -07:00
sabaimran
0145b2a366 Set usage limits on the research mode 2024-11-01 10:29:33 -07:00
sabaimran
3ea94ac972 Only include inferred-queries in chat history when present 2024-10-31 22:01:41 -07:00
sabaimran
149cbe1019 Use bottom anchor for the commandbar popover 2024-10-31 20:40:38 -07:00
sabaimran
21858acccc Remove conversation command always in query, filter out inferred queries that were not with selected tool when going through tool selection iterations 2024-10-31 20:27:38 -07:00
sabaimran
19241805ee Merge branch 'master' of github.com:khoj-ai/khoj into improve-debug-reasoning-and-other-misc-fixes 2024-10-31 18:20:23 -07:00
Debanjum
302bd51d17 Improve online chat actor prompt for research and normal mode
- Match the online query generator prompt to match the formatting of
  extract questions
- Separate iteration results by newline
- Improve webpage and online tool descriptions
2024-10-31 18:17:12 -07:00
Debanjum
52163fe299 Improve research planner prompt to reduce looping 2024-10-31 18:17:01 -07:00
sabaimran
7ebf999688 Merge branch 'master' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-31 18:15:13 -07:00
sabaimran
159ea44883 Remove frame references in the diagramming prompts 2024-10-31 18:14:51 -07:00
Debanjum
89597aefe9 Json dump contents in prompt tracer to make structure discernable 2024-10-31 18:08:42 -07:00
Debanjum
5b15176e20 Only add /research prefix in research mode if not already in user query 2024-10-31 18:08:42 -07:00
sabaimran
559601dd0a Do not exit if/else loop in research loop when notes not found 2024-10-31 13:51:10 -07:00
sabaimran
a13760640c Only show trash can when turnId is present 2024-10-31 13:19:16 -07:00
sabaimran
8d1ecb9bd8 Add optional brew steps for docker install 2024-10-31 12:41:53 -07:00
Debanjum
adca6cbe9d Merge branch 'master' into add-prompt-tracer-for-observability 2024-10-31 02:28:34 -07:00
Debanjum
e17dc9f7b5 Put train of thought ui before Khoj response on web app 2024-10-31 02:24:53 -07:00
Debanjum
e8e6ead39f Fix deleting new messages generated after conversation load 2024-10-30 20:56:38 -07:00
Debanjum
cb90abc660 Resolve train of thought component needs unique key id error on web app 2024-10-30 14:00:21 -07:00
Debanjum
ca5a6831b6 Add ability to delete messages from the web app 2024-10-30 14:00:21 -07:00
Debanjum
ba15686682 Store turn id with each chat message. Expose API to delete chat turn
Each chat turn is a user query, khoj response message pair
2024-10-30 14:00:21 -07:00
Debanjum
f64f5b3b6e Handle add/delete file filter operation on non-existent conversation 2024-10-30 14:00:21 -07:00
Debanjum
b3a63017b5 Support setting seed for reproducible LLM response generation
Anthropic models do not support seed. But offline, gemini and openai
models do. Use these to debug and test Khoj via KHOJ_LLM_SEED env var
2024-10-30 14:00:21 -07:00
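A minimal sketch of how the seed env var might be threaded into generation kwargs (function name and model type labels are assumptions for illustration):

```python
import os

def reproducibility_kwargs(model_type: str) -> dict:
    """Pass a fixed generation seed via KHOJ_LLM_SEED for reproducible
    debugging and testing. Anthropic models don't support seed, so skip."""
    seed = os.getenv("KHOJ_LLM_SEED")
    if seed and model_type in {"openai", "offline", "google"}:
        return {"seed": int(seed)}
    return {}
```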
Debanjum
d44e68ba01 Improve handling embedding model config from admin interface
- Allow server to start if loading embedding model fails with an error.
  This allows fixing the embedding model config via admin panel.

  Previously server failed to start if embedding model was configured
  incorrectly. This prevented fixing the model config via admin panel.

- Convert boolean string in config json to actual booleans when passed
  via admin panel as json before passing to model, query configs

- Only create default model if no search model configured by admin.
  Return first created search model if it's been configured by admin.
2024-10-30 14:00:21 -07:00
Debanjum
358a6ce95d Defer turning cursor color to selected agents color for later
Capability exists but idea needs to be investigated further
2024-10-30 14:00:21 -07:00
Debanjum
2ac840e3f2 Make cursor in chat input take on selected agent color 2024-10-30 14:00:21 -07:00
Debanjum
1448b8b3fc Use 3rd person for user in research prompt to reduce person confusion
Models were getting a bit confused about who is searching for whose
information. Using third person to explicitly call out on whose behalf
these searches are running seems to perform better across
models (gemini's, gpt etc.), even if the role of the message is user.
2024-10-30 13:49:48 -07:00
Debanjum
b8c6989677 Separate example from actual question in extract question prompt 2024-10-30 13:49:48 -07:00
Debanjum
86ffd7a7a2 Handle \n, dedupe json cleaning into single function for reusability
Use placeholder for newline in json object values until json parsed
and values extracted. This is useful when research mode models output
multi-line codeblocks in queries etc.
2024-10-30 13:49:48 -07:00
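A naive sketch of the newline-handling idea (not the actual Khoj cleaner; it tracks whether the parser is inside a string value and ignores escaped quotes):

```python
import json

def clean_json(raw: str) -> dict:
    """Parse a model's JSON response even when string values contain raw
    newlines, e.g. multi-line codeblocks in generated queries."""
    raw = raw.strip().removeprefix("```json").removesuffix("```").strip()
    cleaned, in_string = [], False
    for ch in raw:
        if ch == '"':  # naive toggle: does not handle escaped quotes
            in_string = not in_string
        if ch == "\n" and in_string:
            cleaned.append("\\n")  # escape the raw newline so json.loads succeeds
        else:
            cleaned.append(ch)
    return json.loads("".join(cleaned))
```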
Debanjum
83ca820abe Encourage Anthropic models to output json object using { prefill
The Anthropic API doesn't have the ability to enforce a response with a
valid json object, unlike all the other model types.

While the model will usually adhere to json output instructions,
this step is meant to more strongly encourage it to just output a json
object when a response_type of json_object is requested.
2024-10-30 13:49:48 -07:00
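The prefill trick can be sketched as follows (hypothetical helpers; the Anthropic API continues generation from a prefilled assistant turn, so the opening brace must be re-attached before parsing):

```python
import json

def with_json_prefill(messages: list[dict]) -> list[dict]:
    """Prefill the assistant turn with '{' so the model continues an
    already-started JSON object instead of wrapping it in prose."""
    return messages + [{"role": "assistant", "content": "{"}]

def parse_prefilled(raw_completion: str) -> dict:
    # The completion continues from the prefill, so re-attach the brace
    return json.loads("{" + raw_completion)
```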
Debanjum
dc8e89b5de Pass tool AIs iteration history as chat history for better context
Separate conversation history with user from the conversation history
between the tool AIs and the researcher AI.

Tools AIs don't need top level conversation history, that context is
meant for the researcher AI.

The invoked tool AIs need previous attempts at using the tool in this
research run's iteration history to better tune their next run.

Or at least that is the hypothesis to break the models' looping.
2024-10-30 13:49:48 -07:00
Debanjum
d865994062 Rename code tool arg `previous_iteration_history` to `context` 2024-10-30 13:49:48 -07:00
Debanjum
06aeca2670 Make researcher, docs search AIs ask more diverse retrieval questions
Models weren't generating a diverse enough set of questions. They'd do
minor variations on the original query. What is required is asking
queries from a bunch of different lenses to retrieve the requisite
information.

This prompt update shows the AIs the breadth of questions to ask by
example and instruction. Seems like performance improved based on vibes
2024-10-30 13:49:48 -07:00
Debanjum
01881dc7a2 Revert "Make extract question prompt in 1st person wrt user as its a user message"
This reverts commit 6d3602798aa1b95a30c557576fd4f93ddef2ae76.
2024-10-30 13:49:48 -07:00
Debanjum
3e695df198 Make extract question prompt in 1st person wrt user as its a user message
Divide Example from Actual chat history section in prompt
2024-10-30 13:49:48 -07:00
Debanjum
a3751d6a04 Make extract relevant information system prompt work for any document
Previously it was too strongly tuned for extracting information from
only webpages. This shouldn't be necessary
2024-10-30 13:49:48 -07:00
Debanjum
a39e747d07 Improve passing user name in pick next research tool prompt 2024-10-30 13:49:48 -07:00
Debanjum
deff512baa Improve research mode prompts to reduce looping, increase webpage reads 2024-10-30 13:49:48 -07:00
Debanjum
d3184ae39a Simplify storing and displaying document results in research mode
- Mention count of notes and files discovered
- Store query associated with each compiled reference retrieved for
  easier referencing
2024-10-30 13:49:48 -07:00
Debanjum
8bd94bf855 Do not use a message branch if no msg id provided to prompt tracer 2024-10-30 13:49:48 -07:00
sabaimran
b63fbc5345 Add a simple badge to the dropdown menu that shows subscription status 2024-10-30 13:00:16 -07:00
sabaimran
82f3d79064 Merge branch 'master' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-30 11:32:10 -07:00
sabaimran
2b2564257e Handle subscription case where it's set to trial, but renewal_date is not set. Set the renewal_date for LENGTH_OF_FREE_TRIAL days from subscription creation. 2024-10-30 11:05:31 -07:00
Debanjum
9935d4db0b Do not use a message branch if no msg id provided to prompt tracer 2024-10-28 17:50:27 -07:00
Debanjum
d184498038 Pass context in separate message from user query to research chat actor 2024-10-28 15:37:28 -07:00
Debanjum
d75ce4a9e3 Format online, notes, code context with YAML to be legible for LLM 2024-10-28 15:37:28 -07:00
sabaimran
5bea0c705b Use break-words in the train of thought for better formatting 2024-10-28 15:36:06 -07:00
sabaimran
1f1b182461 Automatically carry over research mode from home page to chat
- Improve mobile friendliness with new research mode toggle, since chat input area is now taking up more space
- Remove clunky title from the suggestion card
- Fix fk lookup error for agent.creator
2024-10-28 15:29:24 -07:00
sabaimran
ebaed53069 Merge branch 'master' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-28 12:39:00 -07:00
sabaimran
889dbd738a Add keyword diagram to diagram output mode description 2024-10-28 12:20:46 -07:00
Debanjum
50ffd7f199 Merge branch 'master' into features/advanced-reasoning 2024-10-28 04:10:59 -07:00
Debanjum
a5d0ca6e1c Use selected agent color to theme the chat input area on home page 2024-10-28 03:47:40 -07:00
Debanjum
aad7528d1b Render slash commands popup below chat input text area on home page 2024-10-28 02:06:04 -07:00
Debanjum
3e17ab438a Separate notes, online context from user message sent to chat models (#950)
Overview
---
- Put context into separate user message before sending to chat model.
  This should improve model response quality and truncation logic in code
- Pass online context from chat history to chat model for response.
  This should improve response speed when previous online context can be reused
- Improve format of notes, online context passed to chat models in prompt.
  This should improve model response quality

Details
---
The document, online search context are now passed as separate user
messages to chat model, instead of being added to the final user message.

This will improve
- Models ability to differentiate data from user query.
  That should improve response quality and reduce prompt injection
  probability
- Make truncation logic simpler and more robust
  When context window hit, can simply pop messages to auto truncate
  context in order of context, user, assistant message for each
  conversation turn in history until reach current user query

  The complex, brittle logic to extract user query from context in
  last user message isn't required.
2024-10-28 02:03:18 -07:00
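The simpler truncation logic enabled by separate context messages can be sketched as below (a hypothetical helper, counting characters instead of tokens for brevity):

```python
def truncate_to_window(messages: list[dict], max_chars: int) -> list[dict]:
    """Pop oldest messages (context, user, assistant per conversation turn)
    until the history fits, always keeping the current user query."""
    def total(msgs: list[dict]) -> int:
        return sum(len(m["content"]) for m in msgs)

    while len(messages) > 1 and total(messages) > max_chars:
        messages.pop(0)  # drop oldest message first
    return messages
```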
Debanjum
8ddd70f3a9 Put context into separate message before sending to offline chat model
Align context passed to offline chat model with other chat models

- Pass context in separate message for better separation between user
  query and the shared context
- Pass filename in context
- Add online results for webpage conversation command
2024-10-28 00:22:21 -07:00
Debanjum
ee0789eb3d Mark context messages with user role as context role isn't being used
The context role was added to allow changing message truncation order based
on the context role as well.

Revert it for now since this is not currently being done.
2024-10-28 00:04:14 -07:00
Debanjum
4e39088f5b Make agent name in home page carousel not text wrap on mobile 2024-10-27 23:03:53 -07:00
Debanjum
94074b7007 Focus chat input on toggle research mode. v-align it with send button 2024-10-27 22:54:55 -07:00
sabaimran
a691ce4aa6 Batch entries into smaller groups to process 2024-10-27 20:43:41 -07:00
sabaimran
2924909692 Add a research mode toggle to the chat input area 2024-10-27 16:37:40 -07:00
sabaimran
68499e253b Auto-collapse train of thought, show after chat response in history 2024-10-27 15:48:13 -07:00
sabaimran
101ea6efb1 Add research mode as a slash command, remove from default path 2024-10-27 15:47:44 -07:00
sabaimran
0bd78791ca Let user exit from command mode with esc, click out, etc. 2024-10-27 15:01:49 -07:00
sabaimran
a121d67b10 Persist the train of thought in the conversation history 2024-10-26 23:46:15 -07:00
sabaimran
9e8ac7f89e Fix input/output mismatches in the /summarize command 2024-10-26 16:37:58 -07:00
sabaimran
e4285941d1 Use the advanced chat model if the user is subscribed 2024-10-26 16:00:54 -07:00
sabaimran
33e48aa27e Merge branch 'add-prompt-tracer-for-observability' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-26 14:09:00 -07:00
sabaimran
fd71a4b086 Add better exception handling in the prompt trace logic, use default value from parameters 2024-10-26 14:08:00 -07:00
Debanjum
3e5b5ec122 Encourage model to read webpages more often after online search
Previously the model would rarely read webpages after a webpage search. Need
the model to read webpages more regularly for deeper research and to stop
getting stuck in repetitive online search loops
2024-10-26 10:49:09 -07:00
Debanjum
bf96d81943 Format online results as YAML to pass it in more readable form to model
Previously, online results were passed as a JSON dump in prompts, which
was less readable for humans, and I'm guessing less readable for
models (trained on human data) as well
2024-10-26 10:49:09 -07:00
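The difference can be seen with a small sketch using PyYAML (assumed here for illustration; the exact serialization options Khoj uses aren't shown in the commit):

```python
import json

import yaml  # PyYAML; illustrative, actual options used by Khoj may differ

# A stand-in for online search results keyed by query
online_results = {
    "what is firecrawl": {
        "webpages": [
            {"link": "https://firecrawl.dev", "snippet": "Scrape any website"},
        ],
    },
}

# JSON dump: a dense single line, hard to scan
as_json = json.dumps(online_results)

# YAML: indented, prose-like structure that is easier to read
as_yaml = yaml.dump(online_results, sort_keys=False, default_flow_style=False)
```

The YAML form spreads each field onto its own indented line, which reads closer to the human-written text chat models are trained on.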
Debanjum
3e97ebf0c7 Unescape special characters in prompt traces for better readability 2024-10-26 10:49:09 -07:00
Debanjum
8af9dc3ee1 Unescape special characters in prompt traces for better readability 2024-10-26 10:45:42 -07:00
Debanjum Singh Solanky
0f3927e810 Send gathered references to client after code results calculated 2024-10-26 05:59:10 -07:00
Debanjum Singh Solanky
f04f871a72 Merge branch 'add-prompt-tracer-for-observability' of github.com:khoj-ai/khoj into features/advanced-reasoning
- Start from this branch's src/khoj/routers/api_chat.py
    Add tracer to all old and new chat actors that don't have it set
    when they are called.
  - Update the new chat actors, like pick next tool etc., to use the tracer too
2024-10-26 05:56:13 -07:00
Debanjum Singh Solanky
ddc6ccde2d Merge branch 'master' into features/advanced-reasoning
- Conflicts:
  Combine both sides of the conflict in all 3 files below
  - src/khoj/processor/conversation/utils.py
  - src/khoj/routers/helpers.py
  - src/khoj/utils/helpers.py
2024-10-26 05:15:51 -07:00
Debanjum Singh Solanky
ea0712424b Commit conversation traces using user, chat, message branch hierarchy
- Message train of thought forks and merges from its conversation branch
- Conversation branches from user branch
- User branches from root commit on the main branch

- Weave chat tracer metadata from api endpoint through all chat actors
  and commit it to the prompt trace
2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
a3022b7556 Allow Offline Chat model calling functions to save conversation traces 2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
eb6424f14d Allow Anthropic API calling functions to save conversation traces 2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
6fcd6a5659 Allow Gemini API calling functions to save conversation traces 2024-10-26 05:08:47 -07:00
Debanjum Singh Solanky
384f394336 Allow OpenAI API calling functions to save conversation traces 2024-10-26 04:59:21 -07:00
Debanjum Singh Solanky
10c8fd3b2a Save conversation traces to git for visualization 2024-10-26 04:59:19 -07:00
sabaimran
7e0a692d16 Release Khoj version 1.27.1 2024-10-25 15:23:07 -07:00
sabaimran
b257fa1884 Add a None check before doing a DT comparison when getting subscription type 2024-10-25 15:22:48 -07:00
sabaimran
0f6f282c30 Release Khoj version 1.27.0 2024-10-25 14:11:14 -07:00
sabaimran
479e156168 Add to the ConversationCommand.Image description to LLM 2024-10-25 09:14:32 -07:00
sabaimran
a11b5293fb Add uploaded images to research mode, code slash command, include code references 2024-10-24 23:56:24 -07:00
sabaimran
5acf40c440 Clean up summarization code paths
Assume the summarization response is a str
2024-10-24 23:56:24 -07:00
sabaimran
12b32a3d04 Resolve merge conflicts 2024-10-24 23:43:55 -07:00
Debanjum
adee5a3e20 Give Vision to Anthropic models in Khoj (#948)
### Major
- Give Vision to Anthropic models in Khoj

### Minor
- Reuse logic to format messages for chat with anthropic models
- Make the get image from url function more versatile and reusable
- Encourage output mode chat actor to output only json and nothing else
2024-10-24 18:02:38 -07:00
Debanjum Singh Solanky
01d740debd Return typed image from image_with_url function for readability 2024-10-24 17:58:46 -07:00
Debanjum Singh Solanky
37317e321d Dedupe user location passed in image, diagram generation prompts 2024-10-24 01:03:29 -07:00
Debanjum Singh Solanky
2a32836d1a Log more descriptive error when image gen fails with Replicate 2024-10-24 01:03:29 -07:00
sabaimran
30f9225021 Merge branch 'master' of github.com:khoj-ai/khoj into features/advanced-reasoning 2024-10-23 19:15:51 -07:00
sabaimran
5120597d4e Remove user customized search model (#946)
- Use a single standard search model across the server. There are diminishing benefits to having multiple user-customizable search models. 
- We may want to add server-level customization for specific tasks
- Store the search model used to generate a given entry on the `Entry` object
- Remove user-facing APIs and view
- Add a management command for migrating the default search model on the server

In a future PR (after running the migration), we'll also remove the `UserSearchModelConfig`
2024-10-23 17:38:37 -07:00
Debanjum Singh Solanky
8d588e0765 Encourage output mode chat actor to output only json and nothing else
The latest Claude model wanted to say more than just give the JSON output.
The updated prompt encourages the model to output just JSON. This is
similar to what is already being done for other prompts
2024-10-23 17:19:21 -07:00
Debanjum Singh Solanky
abad5348a0 Give Vision to Anthropic models in Khoj 2024-10-23 17:19:21 -07:00
Debanjum Singh Solanky
6fd50a5956 Reuse logic to format messages for chat with anthropic models 2024-10-23 17:19:21 -07:00
Debanjum Singh Solanky
82eac5a043 Make the get image from url function more versatile and reusable
It was previously added under the google utils. Now it can be used by
other conversation processors as well.

The updated function
- can get both base64 encoded and PIL formatted images from url
- will return the media type of the image as well in response
2024-10-23 17:19:20 -07:00
sabaimran
f3ce47b445 Create explicit flow to enable the free trial (#944)
* Create explicit flow to enable the free trial

The current design is confusing. It obfuscates the fact that the user is on a free trial. This design will make the opt-in explicit and more intuitive.

* Use the Subscription Type enum instead of hardcoded strings everywhere

* Use length of free trial in the frontend code as well
2024-10-23 15:29:23 -07:00
Debanjum Singh Solanky
bc059eeb0b Merge branch 'master' into put-retrieved-context-in-separate-chatml-message 2024-10-23 12:55:18 -07:00
Debanjum Singh Solanky
3b978b9b67 Fix chat history construction when generating chatml msgs with context 2024-10-23 12:55:12 -07:00
sabaimran
c5e91c346a Fix Docker desktop link for Linux 2024-10-23 11:24:54 -07:00
Debanjum Singh Solanky
9f2c02d9f7 Chat with the default agent by default from web app home
Had temporarily updated the default selected agent to the last used one.
Revert for now as
1. The previous logic was buggy. It didn't select the default agent
   even when the last used agent was the default agent. Fixing that
   would require more work.
2. It may be too early anyway to set the default agent to the last used one.
2024-10-23 03:43:57 -07:00
Debanjum Singh Solanky
218946edda Fix copying message with user images on web app
Adding div elements to the message to render degraded the text copied to
the clipboard for messages with user-uploaded images.

This change fixes that by separating the message to render from the message
for the clipboard. It ensures differently formatted forms of the user
images are added to the two, allowing proper rendering while still
having decently formatted text copied to the clipboard
2024-10-23 03:41:25 -07:00
Debanjum Singh Solanky
7d9a06c8ab Merge branch 'master' into put-retrieved-context-in-separate-chatml-message 2024-10-23 00:13:38 -07:00
sabaimran
7c29af9745 Add link to self-hosted admin page and add docs for building front-end assets. Close #901 2024-10-22 22:42:27 -07:00
Debanjum Singh Solanky
2a50694089 Allow typing multi-line queries from a phone with Enter key
Add a newline instead of sending the message when the Enter key is hit on
mobile displays, as the shift key doesn't exist on phones and the send
button is easily clickable.

Limit using the Enter key to send messages to computers, i.e. larger
displays expected to have full-fledged keyboards.
2024-10-22 21:20:22 -07:00
Debanjum Singh Solanky
a134cd835c Focus on chat input area to enter text after file uploads on web app 2024-10-22 21:19:17 -07:00
Debanjum
c81e708833 Show all agents, smart sorted, in carousel on home screen of web app (#943)
## Overview
Allow quickly selecting, switching agents from agents pane on home page of web app

## Details
- Show all agents in carousel on home screen agent pane of web app
- Smart Sort
  1. Pin default agent as first for ease of access
  2. Show used agents by MRU for ease of access
  3. Shuffle unused agents for discoverability
- Select most recently used agent to chat with by default
- Push smart sort logic down to API
  - Common logic can be reused across clients
  - Agent sort was previously done in web app
- Focus on chat input on agent select
- Double click agent on home page to open edit agent card on agents page
2024-10-22 21:18:17 -07:00
Debanjum Singh Solanky
750fbce0c2 Merge branch 'master' into improve-agent-pane-on-home-screen 2024-10-22 20:05:29 -07:00
Debanjum Singh Solanky
3be505db48 Only show type of error when image generation fails to clients
Rather than showing the raw error message from the underlying service, as it
could contain sensitive information
2024-10-22 20:03:20 -07:00
Debanjum
c6f3253ebd Chat with Multiple Images. Support Vision with Gemini (#942)
## Overview
- Add vision support for Gemini models in Khoj
- Allow sharing multiple images as part of user query from the web app
- Handle multiple images shared in query to chat API
2024-10-22 19:59:18 -07:00
Debanjum Singh Solanky
b3fff43542 Sanitize user attached images. Constrain chat input width on home page
Set max combined image size to 20MB to allow multiple photos to be shared
2024-10-22 19:42:40 -07:00
Debanjum Singh Solanky
6c393800cc Merge branch 'master' into multi-image-chat-and-vision-for-gemini 2024-10-22 18:38:49 -07:00
Debanjum Singh Solanky
91bbd19333 Close the agent detail hover card when scroll on agent pane 2024-10-22 18:03:17 -07:00
Debanjum Singh Solanky
110c67f083 Improve agent pill, detail card styling. Handle null chatInputRef
- Remove border from agent detail hover card on home page
- Do not wrap long agent names in agent pills on home page
- Handle scenario where chatInputRef is null
2024-10-22 18:03:17 -07:00
Debanjum Singh Solanky
aca8bef024 Only use recent chat sessions for agent MRU. Handle null agent chats 2024-10-22 17:46:45 -07:00
sabaimran
0dad4212fa Generate dynamic diagrams (via Excalidraw) (#940)
Add support for generating dynamic diagrams in flow with Excalidraw (https://github.com/excalidraw/excalidraw). This happens in three steps:
1. Default information collection & intent determination step.
2. Improving the overall guidance of the prompt for generating a JSON, Excalidraw-compatible declaration.
3. Generation of the diagram to output to the final UI.

Add support in the web UI.
2024-10-22 16:13:46 -07:00
sabaimran
1e993d561b Release Khoj version 1.26.4 2024-10-22 13:50:08 -07:00
Debanjum Singh Solanky
e8fb79a369 Rate limit the count and total size of images shared via API 2024-10-22 04:37:54 -07:00
Debanjum Singh Solanky
39a613d3bc Fix up openai chat actor tests 2024-10-22 03:09:36 -07:00
Debanjum Singh Solanky
0847fb0102 Pass online context from chat history to chat model for response
Previously only notes context from chat history was included.
This change includes online context from chat history for the model to use
for response generation.

This can reduce the need for online lookups by reusing previous online
context for faster responses. But it will increase overall response time
when past online context isn't reused, due to faster context buildup per
conversation.

Unsure if inclusion of this context is preferable. If not, both notes and
online context should be removed.
2024-10-22 03:09:36 -07:00
Debanjum Singh Solanky
0c52a1169a Put context into separate user message before sending to chat model
The document, online search context are now passed as separate user
messages to chat model, instead of being added to the final user message.

This will improve

- Model's ability to differentiate data from the user query.
  That should improve response quality and reduce prompt injection
  probability

- Make truncation logic simpler and more robust
  When the context window is hit, messages can simply be popped to auto
  truncate context in order of context, user, assistant message for each
  conversation turn in history until the current user query is reached

  The complex, brittle logic to extract user query from context in
  last user message isn't required.

Marking the context message with the assistant role doesn't translate well
across chat models. E.g.
- Gemini can't handle consecutive messages by role = model well
- Claude will merge consecutive messages by the same role. In the current
  message ordering, the context message would get merged into the
  previous assistant response. And if the context message is moved after
  the user query, the truncation logic will have to hop and skip while
  doing deletions
- GPT seems to handle consecutive roles of any type fine

Using context role = user generalizes better across chat models for
now and aligns with previous behavior.
2024-10-22 03:09:36 -07:00
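The simplified truncation described above can be sketched like so (a minimal illustration, not Khoj's actual truncation code, which counts tokens rather than messages): with context in its own user message, whole turns can be popped from the front of history until the window fits, always keeping the current user query.

```python
# Hypothetical sketch: pop the oldest messages first, which naturally
# drops each turn's context, user, then assistant message in order.
# Real truncation would use token counts, not a message count.

def truncate(messages: list[dict], max_messages: int) -> list[dict]:
    messages = list(messages)
    # Always keep at least the final (current) user query.
    while len(messages) > max_messages and len(messages) > 1:
        messages.pop(0)
    return messages

history = [
    {"role": "user", "content": "context: turn 1 notes"},
    {"role": "user", "content": "turn 1 query"},
    {"role": "assistant", "content": "turn 1 answer"},
    {"role": "user", "content": "current query"},
]
truncated = truncate(history, max_messages=2)
```

No parsing of the last user message is needed to recover the query; the final message simply is the query.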
Debanjum Singh Solanky
7ac241b766 Improve format of notes, online context passed to chat models in prompt
Improve separation of note snippets and show their origin file in the notes
prompt to have more readable, contextualized text shared with the model.

Previously the references dict was being directly passed as a string.
The documents don't look well formatted and are less intelligible.

- Passing file path along with notes snippets will help contextualize
  the notes better.
- Better formatting should help with making notes more readable by the
  chat model.
2024-10-22 03:09:36 -07:00
sabaimran
892040972f Replace user_id with server_id in telemetry 2024-10-21 20:47:52 -07:00
sabaimran
db959a504d Fix the version of pymupdf to avert build errors 2024-10-21 12:56:51 -07:00
sabaimran
21e69b506d Release Khoj version 1.26.3 2024-10-21 08:19:05 -07:00
Debanjum Singh Solanky
9b554feb91 Show agent details card on hover on agent pill on web app home page
- Double click on agent to open edit agent card
- Focus on chat input pane when agent selected/clicked
  for quick, smooth agent switch and message flow
- Hover on agent to see agent detail card on non-mobile displays
  - Use debounce to only show when hover on card for a bit
2024-10-21 00:08:01 -07:00
Debanjum Singh Solanky
220ff1df62 Set chatInputArea forward ref from parent components for control 2024-10-21 00:02:48 -07:00
Debanjum Singh Solanky
54b92eaf73 Extract isUserSubscribed check from Agents page to make it reusable 2024-10-20 23:31:48 -07:00
Debanjum Singh Solanky
bdbe8f003e Move agent details and edit card out into reusable components on web app 2024-10-20 23:31:47 -07:00
sabaimran
ad197be70c Fix PDFs unit test, skip OCR 2024-10-20 22:25:41 -07:00
sabaimran
59fec37943 Improve agents management, and limit agents view to private and official agents
- Default to None for the input_tools and output_modes so that they can be managed in the admin panel
- Hold off on showing off all Public Agents until we have a better experience for user profiles etc.
2024-10-20 22:24:51 -07:00
sabaimran
a979457442 Add unit tests for agents
- Add permutations of testing for with, without knowledge base. Private, public, different users.
2024-10-20 20:04:50 -07:00
sabaimran
fc70f25583 Release Khoj version 1.26.2 2024-10-20 18:03:36 -07:00
sabaimran
046de57571 Improve error handling when documents not searched with stack trace
- Stop extracting OCR content from PDFs
- Only use the agent knowledge base when the user is not provided
2024-10-20 18:03:14 -07:00
sabaimran
2b68d61fef Release Khoj version 1.26.1 2024-10-20 16:21:51 -07:00
Debanjum Singh Solanky
5fca41cc29 Show agents sorted by mru, Select mru agent by default on web app
Have get agents API return agents ordered intelligently
- Put the default agent first
- Sort used agents by most recently chatted with agent for ease of access
- Randomly shuffle the remaining unused agents for discoverability
2024-10-20 15:21:25 -07:00
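The smart ordering above can be sketched as a small function (an illustrative stand-in, not the actual API implementation; agent objects are simplified to name strings):

```python
import random

def smart_sort(agents: list[str], default: str, recently_used: list[str]) -> list[str]:
    """Order agents: default first, then MRU used agents, then shuffled rest.

    recently_used is assumed ordered most-recently-chatted-with first.
    """
    used = [a for a in recently_used if a in agents and a != default]
    rest = [a for a in agents if a != default and a not in used]
    random.shuffle(rest)  # shuffle unused agents for discoverability
    return [default] + used + rest

ordered = smart_sort(
    ["khoj", "chef", "coder", "poet"],
    default="khoj",
    recently_used=["poet"],
)
```

Pushing this down to the API lets every client (web, Emacs, Obsidian) get the same ordering without duplicating the sort logic.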
Debanjum Singh Solanky
a6bfdbdbfe Show all agents in carousel on home screen agent pane of web app
This change wraps the agent pane in a scroll area with all agents shown.
It allows selecting an agent to chat with directly from the home
screen without breaking flow and having to jump to the agents page.

The previous flow was not convenient for quickly and consistently starting
a chat with one of your standard agents.

This was because a random subset of agents was shown on the home page.
To start a chat with an agent not shown on home screen load, you had to
open the agents page and initiate the conversation from there.
2024-10-20 15:21:25 -07:00
Debanjum Singh Solanky
9ffd726799 Allow making sync api requests with body from khoj.el 2024-10-20 15:16:40 -07:00
Debanjum Singh Solanky
ac51920859 Start conversation with Agents from within Emacs
Exposes a transient switch with available agents as selectable options
in the Khoj chat sub-menu.

Currently shows agent slugs instead of agent names as options. This
isn't the cleanest but gets the job done for now.

Only new conversations with a different agent can be started. Existing
conversations will continue with the original agent it was created with.
The ability to switch the conversation's agent doesn't exist on the
server yet.
2024-10-20 15:16:40 -07:00
Debanjum Singh Solanky
7646ac6779 Style user attached images as carousel on chat input area of web app 2024-10-20 00:40:08 -07:00
sabaimran
5d5bea6a5f Ensure images are reset after messages processed 2024-10-19 22:02:06 -07:00
sabaimran
1ad6e1749f Move window redirect to after relevant data is dropped in localStorage on the home page
One limitation of this methodology is that localStorage has a limit on how much data it can hold. Should add more graceful error handling here as well.
2024-10-19 20:36:13 -07:00
sabaimran
cb6b3ec1e9 Improve mode description given to LLM when determining how to respond.
Currently the model has difficulty following instructions when an image is shared. It's more likely to try and output an image. Update to make a clearer distinction.
2024-10-19 20:35:32 -07:00
sabaimran
545259e308 Remove unused icons in chatInputArea 2024-10-19 16:54:21 -07:00
Debanjum Singh Solanky
3cc1426edf Style user attached images with fixed height, in a single row on web app 2024-10-19 16:48:36 -07:00
Debanjum Singh Solanky
58a331227d Display the attached images inside the chat input area on the web app
- Put the attached images display div inside the same parent div as
  the text area
- Keep the attachment, microphone/send message buttons aligned with
  the text area. So the attached images just show up at the top of the
  text area but everything else stays at the same horizontal height as
  before.

- This improves the UX by
  - Ensuring that the attached images do not obscure the agents pane
    above the chat input area
  - The attached images visually look like they are inside the actual
    input area, rather than floating above it. So the visual aligns
    with the semantics
2024-10-19 16:29:45 -07:00
Debanjum Singh Solanky
3e39fac455 Add vision support for Gemini models in Khoj 2024-10-19 15:47:03 -07:00
Debanjum Singh Solanky
0d6a54c10f Allow sharing multiple images as part of user query from the web app
Previously the web app only expected a single image to be shared by
the user as part of their query.

This change allows sharing multiple images from the web app.

Closes #921
2024-10-19 15:47:03 -07:00
Debanjum Singh Solanky
e2abc1a257 Handle multiple images shared in query to chat API
Previously Khoj could respond to a single shared image at a time.

This change updates the chat API to accept multiple images shared by
the user and send them to the appropriate chat actors, including the
OpenAI response generation chat actor, for getting an image-aware
response
2024-10-19 14:53:33 -07:00
Debanjum Singh Solanky
d55cba8627 Pass user query for chat response when document lookup fails
Recent changes made Khoj try to respond even when document lookup fails.
That change missed handling downstream effects of a failed document
lookup, as defiltered_query was null and so the text response
didn't have the user query to respond to.

This code initializes defiltered_query to the original user query to
handle that.

Also, response_type wasn't being passed via
send_message_to_model_wrapper_sync, unlike in the async scenario
2024-10-19 14:32:19 -07:00
Debanjum Singh Solanky
a4e6e1d5e8 Share webp images from web, desktop, obsidian app to chat with 2024-10-19 14:32:17 -07:00
sabaimran
dbd9a945b0 Re-evaluate agent private/public filtering after authenticatedData is retrieved. Update selectedAgent check logic to reflect this. 2024-10-18 09:31:56 -07:00
Debanjum Singh Solanky
35015e720e Release Khoj version 1.26.0 2024-10-17 18:25:53 -07:00
Debanjum Singh Solanky
f0dcfe4777 Explicitly ask Gemini models to format their response with markdown
Otherwise it can get confused by the format of the passed context (e.g.
respond in org-mode if the context contains org-mode notes)
2024-10-17 18:12:47 -07:00
Debanjum
7fb4c2939d Make Chat and Online Search Resilient and Faster (#936)
## Overview
### New
- Support using Firecrawl(https://firecrawl.dev) to read web pages
- Add, switch and re-prioritize web page reader(s) to use via the admin panel

### Speed
- Improve response speed by aggregating web page read, extract queries to run only once for each web page

### Response Resilience
- Fallback through enabled web page readers until web page read
- Enable reading web pages on the internal network for self-hosted Khoj running in anonymous mode
- Try respond even if web search, web page read fails during chat
- Try respond even if document search via inference endpoint fails

### Fix
- Return data sources to use if exception in data source chat actor

## Details
### Configure web page readers to use
 - Only the web scraper set in Server Chat Settings via the Django admin panel, if set
 - Otherwise use the web scrapers added via the Django admin panel (in order of priority), if set
 - Otherwise, use all the web scrapers enabled by setting API keys via environment variables (e.g. `FIRECRAWL_API_KEY`, `JINA_API_KEY` env vars set), if set
 - Otherwise, use Jina to web scrape if no scrapers explicitly defined
 
For self-hosted setups running in anonymous-mode, the ability to directly read webpages is also enabled by default. This is especially useful for reading webpages in your internal network that the other web page readers will not be able to access.

### Aggregate webpage extract queries to run once for each distinct web page

Previously, we'd run separate webpage read and extract relevant
content pipes for each distinct (query, url) pair.

Now we aggregate all queries for each url to extract information from
and run the webpage read and extract relevant content pipes once for
each distinct URL.

Even though the webpage content extraction pipes were previously being
run in parallel, they increased the response time by
1. adding more ~duplicate context for the response generation step to read
2. being more susceptible to variability in web page read latency of the parallel jobs

The aggregated retrieval of context for all queries for a given
webpage could result in some hit to context quality. But it should
improve and reduce variability in response time, quality and costs. 

This should especially help with speed and quality of online search 
for offline or low context chat models.
2024-10-17 17:57:44 -07:00
Debanjum Singh Solanky
2c20f49bc5 Return enabled scrapers as WebScraper objects for more ergonomic code 2024-10-17 17:44:09 -07:00
Debanjum Singh Solanky
0db52786ed Make web scraper priority configurable via admin panel
- Simplifies changing the order in which web scrapers are invoked to read
  a web page by just changing their priority number on the admin panel.
  Previously you'd have to delete and re-add the scrapers to change
  their priority.

- Add help text for each scraper field to ease admin setup experience

- Friendlier env var to use Firecrawl's LLM to extract content

- Remove use of a separate friendly name for scraper types.
  Reuse the actual name and just make the actual name better
2024-10-17 17:42:42 -07:00
Debanjum Singh Solanky
20b6f0c2f4 Access internal links directly via a simple get request
The other webpage scrapers will not work for internal webpages. Try
accessing those URLs directly if they are visible to the Khoj server over
the network.

Only enable this by default for self-hosted, single user setups.
Otherwise the ability to scan the internal network would be a liability!

For use-cases where it makes sense, the Khoj server admin can
explicitly add the direct webpage scraper via the admin panel
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
d94abba2dc Fallback through enabled scrapers to reduce web page read failures
- Set up scrapers via API keys, explicitly adding them via admin panel
  or enabling only a single scraper to use via server chat settings.

- Use validation to ensure only valid scrapers are added via the admin panel,
  e.g. that an API key is present for scrapers that require it

- Modularize the read webpage functions to take api key, url as args
  Removes dependence on constants loaded in online_search. Functions
  are now mostly self contained

- Improve ability to read webpages by using the speed, success rate of
  different scrapers. Optimal configuration needs to be discovered
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
11c64791aa Allow changing perf timer log level. Info log time for webpage read 2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
c841abe13f Change webpage scraper to use via server admin panel 2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
e47922e53a Aggregate webpage extract queries to run once for each distinct webpage
This should reduce webpage read and response generation time.

Previously, we'd run separate webpage read and extract relevant
content pipes for each distinct (query, url) pair.

Now we aggregate all queries for each url to extract information from
and run the webpage read and extract relevant content pipes once for
each distinct url.

Even though the webpage content extraction pipes were previously being
run in parallel, they increased response time by
1. adding more context for the response generation chat actor to
   respond from
2. and by being more susceptible to page read and extract latencies of
   the parallel jobs

The aggregated retrieval of context for all queries for a given
webpage could result in some hit to context quality. But it should
improve and reduce variability in response time, quality and costs.
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
98f99fa6f8 Allow using Firecrawl to extract web page content
Set the FIRECRAWL_TO_EXTRACT environment variable to true to have
Firecrawl scrape and extract content from the webpage using their LLM

This could be faster; not sure about quality as the LLM used is obfuscated
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
993fd7cd2b Support using Firecrawl to read webpages
Firecrawl is open-source, self-hostable with a default hosted service
provided, similar to Jina.ai. So it can be
1. Self-hosted as part of a private Khoj cloud deployment
2. Used directly by getting an API key from the Firecrawl.dev service

This is as an alternative to Olostep and Jina.ai for reading webpages.
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
731ea3779e Return data sources to use if exception in data source chat actor
Previously no value was returned if an exception got triggered when
collecting information sources to search.
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
a932564169 Try respond even if web search, webpage read fails during chat
Khoj shouldn't refuse to respond to the user if web lookups fail.
It should transparently mention that online search etc. failed,
but try to respond as best as it can without those references.

This change ensures a response to the users query is attempted even
when web info retrieval fails.
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
1b04b801c6 Try respond even if document search via inference endpoint fails
The huggingface endpoint can be flaky. Khoj shouldn't refuse to
respond to the user if document search fails.
It should transparently mention that document lookup failed,
but try to respond as best as it can without the document references.

This change provides graceful failover when inference endpoint
requests fail either when encoding the query or reranking retrieved docs
2024-10-17 17:40:49 -07:00
Debanjum Singh Solanky
9affeb9e85 Fix to log the client app calling the chat API
- Remove unused subscribed variable from the chat API
- Client app logging was unexpectedly dropped when the chat API was
  migrated to do advanced streaming in July
2024-10-17 15:24:43 -07:00
Debanjum Singh Solanky
c6c48cfc18 Fix arg to generate_summary_from_file and type of this_iteration 2024-10-17 13:38:48 -07:00
Debanjum Singh Solanky
884fe42602 Allow automation as an output mode supported by custom agents 2024-10-17 11:58:52 -07:00
Debanjum Singh Solanky
c5e19b37ef Use Khoj icons. Add automation & improve agent text on web login page 2024-10-17 11:58:52 -07:00
Debanjum Singh Solanky
42acc324dc Handle correctly setting file filters as array when API call fails
- Only set addedFiles to selectedFiles when selectedFiles is an array
- Only set selectedFiles, addedFiles to the API response json when the
  response succeeded. Previously we set them to the response json
  on errors as well. This made the variables into json objects instead
  of arrays on API call failure
- Check if selectedFiles, addedFiles are arrays before running
  operations on them. Previously the addedFiles.includes was where the
  code would fail
2024-10-17 11:58:52 -07:00
Debanjum Singh Solanky
7ebfc24a96 Upgrade Django version used by Khoj server 2024-10-17 11:58:52 -07:00
Debanjum Singh Solanky
ea59dde4a0 Upgrade documentation website dependencies 2024-10-17 11:58:52 -07:00
sabaimran
07ab8ab931 Update handling of gemini response with new API changes. Per documentation:
finish_reason (google.ai.generativelanguage_v1beta.types.Candidate.FinishReason):
            Optional. Output only. The reason why the
            model stopped generating tokens.
            If empty, the model has not stopped generating
            the tokens.
2024-10-17 09:00:01 -07:00
Rehan Daphedar
27835628e6 Fix typo in docs for error 400 fix when self-hosting (#938) 2024-10-16 23:15:43 -07:00
Debanjum Singh Solanky
19c65fb82b Show user uuid field in django admin panel 2024-10-15 17:59:12 -07:00
Debanjum Singh Solanky
6c5b362551 Remove deprecated GET chat API endpoint 2024-10-15 15:13:09 -07:00
Debanjum Singh Solanky
931c56182e Fix default chat model to use user model if no server chat model set
- Advanced chat model should also fall back to the user chat model if set
- Get conversation config should fall back to the user chat model if set

These fallbacks assume no server chat model settings are configured
2024-10-15 15:13:09 -07:00
Debanjum Singh Solanky
feb6d65ef8 Merge branch 'master' into features/advanced-reasoning 2024-10-15 09:37:56 -07:00
Debanjum Singh Solanky
336c6c3689 Show tool to use decision for next iteration in train of thought 2024-10-15 01:12:18 -07:00
Debanjum Singh Solanky
81fb65fa0a Return data sources to use if exception in data source chat actor
Previously no value was returned if an exception got triggered when
collecting information sources to search.
2024-10-14 18:20:20 -07:00
Debanjum Singh Solanky
3c93f07b3f Try to respond even if web search or webpage read fails during chat
Khoj shouldn't refuse to respond to the user if web lookups fail.
It should transparently mention that online search etc. failed,
but try to respond as best it can without those references

This change ensures a response to the user's query is attempted even
when web info retrieval fails.
2024-10-14 18:13:26 -07:00
Debanjum Singh Solanky
07ab7ebf07 Try to respond even if document search via inference endpoint fails
The Hugging Face endpoint can be flaky. Khoj shouldn't refuse to
respond to the user if document search fails.
It should transparently mention that document lookup failed,
but try to respond as best it can without the document references

This change provides graceful failover when inference endpoint
requests fail, either when encoding the query or when reranking retrieved docs
2024-10-14 18:13:26 -07:00
Debanjum Singh Solanky
d6206aa80c Remove deprecated GET chat API endpoint 2024-10-14 18:13:26 -07:00
Debanjum Singh Solanky
263eee4351 Fix default chat model to use user model if no server chat model set
- Advanced chat model should also fall back to the user chat model if set
- Get conversation config should fall back to the user chat model if set

These fallbacks assume no server chat model settings are configured
2024-10-14 18:13:26 -07:00
sabaimran
81aa1b5589 Update some edge cases and usability of create agent flow
- Use the slug to determine which agent to PATCH
- Make the agent creation form multi-step to streamline the process
2024-10-14 14:07:31 -07:00
Debanjum Singh Solanky
abcd11cfc0 Merge branch 'master' into features/advanced-reasoning 2024-10-13 03:06:23 -07:00
Debanjum Singh Solanky
9356e66b94 Fix default chat model to use user model if no server chat model set
- Advanced chat model should also fall back to the user chat model if set
- Get conversation config should fall back to the user chat model if set

These fallbacks assume no server chat model settings are configured
2024-10-13 03:02:29 -07:00
Debanjum Singh Solanky
9314f0a398 Fix default chat configs to use user model if no server chat model set
Post-merge cleanup in advanced reasoning to fall back to the user chat
model if no server chat model is defined for advanced and default
2024-10-13 02:59:10 -07:00
Debanjum Singh Solanky
8ff13e4cf6 Update readme. Mention new capabilities 2024-10-13 01:30:53 -07:00
Debanjum Singh Solanky
a2200466b7 Merge branch 'master' into features/advanced-reasoning 2024-10-12 21:01:22 -07:00
Debanjum
c66c571396 Simplify switching chat model when self-hosting (#934)
# Overview
- Default to use user chat models for train of thought when no server chat settings created by admins
- Default to not create server chat settings on first run

# Details
This change simplifies switching chat models for self-hosted setups 
by just changing the chat model on the user settings page.

It falls back to the user chat model for train of thought 
if server chat settings have not been created on the admin panel.

Server chat settings, when set, controls the chat model used 
for Khoj's train of thought and the default user chat model.

Previously a self-hosted user had to update
1. the server chat settings in the admin panel and
2. their own user chat model in the user settings panel

to completely switch to a different chat model 
for both train of thought & response generation respectively

You can still set server chat settings via the admin panel 
to use a different chat model for train of thought vs response generation. 
But this is only useful for advanced, multi-user setups.
2024-10-12 19:58:05 -07:00
Debanjum Singh Solanky
90888a1099 Log when new user created via magic link or whatsapp as well 2024-10-12 19:56:01 -07:00
Debanjum Singh Solanky
8222c6629d Remove unused subscribed argument to read_webpage function 2024-10-12 10:45:39 -07:00
Debanjum Singh Solanky
9daaae0fdb Render inline any image files output by code in message
Update the regex to also include links to code-generated images that
aren't explicitly meant to be displayed inline. This allows folks to
download the image (unlike the broken link the model would otherwise
fabricate)
2024-10-12 10:34:57 -07:00
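A minimal sketch of the idea behind that regex change (the pattern, function name, and file extensions here are illustrative assumptions, not Khoj's actual code):

```python
import re

# Illustrative pattern: match markdown links to image files whether or not
# they use the inline image syntax (the leading "!").
IMAGE_LINK = re.compile(
    r"!?\[([^\]]*)\]\(([^)]+\.(?:png|jpe?g|webp|gif))\)", re.IGNORECASE
)

def find_image_links(message: str) -> list[tuple[str, str]]:
    """Return (label, path) pairs for image links in a chat message."""
    return IMAGE_LINK.findall(message)
```

Matching plain `[label](file.png)` links in addition to `![label](file.png)` is what lets sandbox-generated images render (or at least stay downloadable) in the message.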
Debanjum Singh Solanky
20d495c43a Update the iterative chat director prompt to generalize across chat models
These prompts work across o1 and standard OpenAI models. They work with
Anthropic and Google models as well
2024-10-12 10:34:57 -07:00
sabaimran
eb4d598d0f Eliminate the drawer component from the Agents view 2024-10-10 20:40:59 -07:00
sabaimran
0a1c3e4f41 Release Khoj version 1.25.0 2024-10-10 18:07:30 -07:00
sabaimran
01a58b71a5 Skip image, code generation if in research mode 2024-10-10 18:06:29 -07:00
Debanjum Singh Solanky
1b13d069f5 Pass data collected from various sources to code tool in normal flow too 2024-10-10 05:19:27 -07:00
Debanjum Singh Solanky
f462d34547 Render images files output by code interpreter in message on web app 2024-10-10 05:17:53 -07:00
Debanjum Singh Solanky
564491e164 Extract date filters quoted with non-ascii quotes in query 2024-10-10 04:45:00 -07:00
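A hedged sketch of what accepting non-ascii quotes around date filters might look like. The `dt` filter syntax follows the commit title; the exact regex and function name are assumptions, not Khoj's actual implementation:

```python
import re

# Accept ASCII and curly (smart) quotes around date filter values,
# e.g. dt>="2024-01-01" or dt>=“2024-01-01”.
QUOTES = "\"'“”‘’"
DATE_FILTER = re.compile(rf"dt([<>]=?|=)([{QUOTES}])([^{QUOTES}]+)[{QUOTES}]")

def extract_date_filters(query: str) -> list[tuple[str, str]]:
    """Return (operator, value) pairs for date filters in the query."""
    return [(m.group(1), m.group(3)) for m in DATE_FILTER.finditer(query)]
```

This matters because mobile keyboards and word processors often substitute curly quotes, which a plain `"`-only pattern would silently miss.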
Debanjum Singh Solanky
6a8fd9bf33 Reorder embeddings search arguments based on argument importance 2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
0eacc0b2b0 Use consistent name for user, planner to not miss current user query
Previously Khoj would start answering the previous query. This may be
because the prompt used "User" for messages in chat history but "Q"
for the current user prompt.
2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
284c8c331b Increase default max iterations for research chat director to 5 2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
1e390325d2 Let research chat director decide which webpage to read, if any
Make the number of webpages to read automatically during search_online
configurable via an argument.

Default it to 1, so other callers of the function
are unaffected.

But the iterative chat director can still decide which, if
any, webpages to read based on the online search it performs
2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
5a699a52d2 Improve webpage summarization prompt to better extract links, excerpts
This change allows the iterative director to dive deeper into its
research as the data extracted contains relevant links from the webpage

The previous summarization prompt didn't extract relevant links from
the webpage, which limited further exploration from webpages
2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
61df1d5db8 Pass previous iteration results to code interpreter chat actors
This improves the code interpreter chat actor's ability to generate
code with data collected during the previous iterations
2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
9e7025b330 Set python interpreter sandbox URL via environment variable 2024-10-10 04:45:00 -07:00
Debanjum Singh Solanky
2dc5804571 Extract defilter query into conversation utils for reuse 2024-10-10 04:45:00 -07:00
sabaimran
e69a8382f2 Add a code icon for code-related train of thought 2024-10-09 23:56:57 -07:00
sabaimran
536422a40c Include code snippets in the reference panel 2024-10-09 23:54:11 -07:00
Debanjum Singh Solanky
8d33c764b7 Allow iterative chat director to use python interpreter as a tool 2024-10-09 23:38:20 -07:00
Debanjum Singh Solanky
b373073f47 Show executed code in web app chat message references 2024-10-09 22:13:18 -07:00
Debanjum Singh Solanky
a98f97ed5e Refactor Run Code tool into separate module and modularize code functions
Move construct_chat_history and ChatEvent enum into conversation.utils
and move send_message_to_model_wrapper to conversation.helper to
modularize code. And start thinning out the bloated routers.helper

- conversation.util components are shared functions that conversation
  child packages can use.
- conversation.helper components can't be imported by the conversation
  child packages, but it can use those child packages

This division allows better modularity while avoiding circular
import dependencies
2024-10-09 22:13:17 -07:00
Debanjum Singh Solanky
8044733201 Give Khoj ability to run python code as a tool triggered via chat API
Create python code executing chat actor
- The chat actor generates python code within sandbox constraints
- Run the generated python code in the Cohere Terrarium, a pyodide
  based sandbox accessible at the sandbox URL
2024-10-09 21:37:22 -07:00
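A hedged sketch of talking to such a sandbox. The payload shape, function names, and endpoint are assumptions based on the commit description (a Terrarium-style sandbox reachable at a configurable URL), not Khoj's actual API:

```python
import json
import urllib.request

def build_sandbox_payload(code: str) -> bytes:
    # Assumed payload shape: the sandbox receives the code to run as JSON.
    return json.dumps({"code": code}).encode()

def run_in_sandbox(code: str, sandbox_url: str) -> dict:
    """POST generated python code to the sandbox and return its JSON result."""
    request = urllib.request.Request(
        sandbox_url,
        data=build_sandbox_payload(code),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

Keeping execution behind an HTTP boundary is the point: the model-generated code runs in the pyodide sandbox process, never in the Khoj server itself.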
Debanjum Singh Solanky
4d33239af6 Improve prompts for the iterative chat director 2024-10-09 21:23:18 -07:00
Debanjum Singh Solanky
6ad85e2275 Fix to continue showing retrieved documents in train of thought 2024-10-09 21:20:22 -07:00
sabaimran
a6f6e4f418 Fix notes references and passage of user query in the chat flow 2024-10-09 20:34:20 -07:00
Debanjum Singh Solanky
ec248efd31 Allow iterative chat director to do notes search 2024-10-09 19:04:59 -07:00
Debanjum Singh Solanky
a6905a9f0c Pass background context to iterating chat director 2024-10-09 19:04:59 -07:00
sabaimran
028b6e6379 Fix yield for scraping direct web page 2024-10-09 18:14:08 -07:00
sabaimran
717d9da8d8 Handle when summarize result is not present, rename variable in for loop from query 2024-10-09 17:57:08 -07:00
sabaimran
03544efde2 Ignore typing of the result dict for online, web page scrape 2024-10-09 17:48:24 -07:00
sabaimran
ab81b01fcb Fix typing of direct_web_pages and remove the deprecated chat API 2024-10-09 17:46:28 -07:00
sabaimran
5b8d663cf1 Add intermediate summarization of results when planning with o1 2024-10-09 17:40:56 -07:00
sabaimran
7b288a1179 Clean up the function planning prompt a little bit 2024-10-09 16:59:20 -07:00
sabaimran
f71e4969d3 Skip summarize while it's broken, and snip some other parts of the workflow while under construction 2024-10-09 16:40:06 -07:00
sabaimran
f7e6f99a32 add typing for extract document references 2024-10-09 16:05:34 -07:00
sabaimran
6960fb097c update types of prev iterations response 2024-10-09 16:04:39 -07:00
sabaimran
4978360852 Fix type of previous_iterations 2024-10-09 16:02:41 -07:00
sabaimran
46ef205a75 Add additional type annotations for compiled_references et al 2024-10-09 16:01:52 -07:00
sabaimran
4fbaef10e9 Correct usage of the summarize function 2024-10-09 15:58:05 -07:00
sabaimran
c91678078d Correct the usage of query passed to summarize function 2024-10-09 15:55:55 -07:00
sabaimran
f867d5ed72 Working prototype of meta-level chain of reasoning and execution
- Create a more dynamic reasoning agent that can evaluate information and understand what it doesn't know, making moves to get that information
- Lots of hacks and code that needs to be reversed later on before submission
2024-10-09 15:54:25 -07:00
Debanjum
00546c1a63 Fix link to llama-cpp-python setup docs 2024-10-09 01:30:33 -07:00
Debanjum Singh Solanky
05fb0f14d3 Use user chat models for train of thought when no server chat settings
Update chat actors to use user's chat model for train of thought. This
requires passing the user info as an argument to all the chat actors.

Whether the user is subscribed or not can be inferred from the user
info being passed, so it doesn't need to be passed as a separate
argument to chat actor functions

Let send_message_to_model function infer chat model instead of passing
it as an argument from some chat actors. Better if this logic can be
done in a single place.
2024-10-09 00:07:08 -07:00
Debanjum Singh Solanky
ec0c79217f Do not set server chat settings on first run
Server chat settings can be set for advanced self-hosted or multi-user
cloud setups. They are not necessary anymore as we now fall back to
the user's chat model for train of thought
2024-10-09 00:07:08 -07:00
Debanjum Singh Solanky
a9009ea774 Default to use user chat model if server chat settings not defined
Fallback to use user chat model for train of thought if server chat
settings not defined.

This simplifies switching chat models for single-user, self-hosted
setups by just changing the chat model on the user settings page.

Server chat settings, when set, controls the default user chat model
and the chat model that is used for Khoj's train of thought.

Previously a self-hosted user had to update both the server chat
settings in the admin panel and their own user chat model in the user
settings panel to explicitly switch to a different chat model (i.e to
switch to a new model for both train of thought & response generation)

You can still set server chat settings to use a different chat
model for train of thought and response generation. But this is only
necessary for advanced self-hosted or cloud hosted setups of Khoj.
2024-10-09 00:07:08 -07:00
Debanjum Singh Solanky
9a056383e0 Reduce size of start chat and edit buttons on agent card in web app 2024-10-09 00:00:32 -07:00
Debanjum Singh Solanky
dc7f22f76c Mention no. of docs in agents knowledge base in its badge hover text 2024-10-08 23:51:00 -07:00
Debanjum Singh Solanky
13fb22f7e7 Update agent form data shown in edit card after save operation on web app
Previously you had to refresh the page to see the updated data on
reopening the agent's edit card after a save operation.

Now you see the latest saved agent data on reopening the agent's edit
card. This should avoid confusion about whether the data was saved
correctly
2024-10-08 23:26:04 -07:00
Debanjum Singh Solanky
dd770cf1b9 Start chat with public and protected agents when shared via link 2024-10-08 22:10:07 -07:00
Debanjum Singh Solanky
80212c50fd Use default agent in others chats with an agent if agent made private
If a public or protected agent is made private, other users who were
conversing with that agent will have to carry on their conversations
using the default agent instead
2024-10-08 22:08:38 -07:00
Debanjum Singh Solanky
d628f89ce9 Prefetch agents related database models 2024-10-08 21:59:15 -07:00
Debanjum Singh Solanky
8de67c5d4d Fallback to use general command if no tool selected by agent 2024-10-08 19:48:02 -07:00
Debanjum Singh Solanky
b80c4bcfdd Improve agent command descriptions 2024-10-08 19:47:51 -07:00
Debanjum Singh Solanky
67d0e59eac Pass chat history to the summarize chat actor 2024-10-08 18:44:52 -07:00
Debanjum Singh Solanky
7e3090060b Encourage Gemini to output more verbose responses 2024-10-08 18:41:43 -07:00
Debanjum Singh Solanky
bbbdba3093 Time embedding model load for better visibility into app startup time
Loading the embedding model, even locally, seems to be taking much
longer. Use a timer to get visibility into embedding and cross-encoder
model load times
2024-10-08 18:41:43 -07:00
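A minimal sketch of such a load timer (Khoj's actual timer utility may differ; the helper name and log format here are assumptions):

```python
import time
from contextlib import contextmanager

@contextmanager
def timer(message: str, log=print):
    """Log how long the wrapped block takes, e.g. an embedding model load."""
    start = time.perf_counter()
    try:
        yield
    finally:
        log(f"{message} took {time.perf_counter() - start:.3f}s")
```

Usage would look like `with timer("Load embedding model"): model = load_model()`, making slow startup steps visible in server logs.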
Debanjum Singh Solanky
516472a8d5 Switch default tokenizer to tiktoken as more widely used
The tiktoken BPE based tokenizers seem more widely used these days.

Fall back to the gpt-4o tiktoken tokenizer to count tokens for
context stuffing
2024-10-08 18:41:43 -07:00
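The token-counting fallback described above might look like this sketch (the function name and the character-count heuristic are assumptions, not Khoj's actual implementation):

```python
def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Count tokens with tiktoken, falling back to the gpt-4o encoding
    when the model name is unknown to tiktoken."""
    try:
        import tiktoken
    except ImportError:
        # Rough heuristic when tiktoken is unavailable: ~4 chars per token.
        return max(1, len(text) // 4)
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        encoding = tiktoken.encoding_for_model("gpt-4o")
    return len(encoding.encode(text))
```

The fallback matters for context stuffing: an approximate count for an unknown model is far better than failing, since it only affects how much history fits in the prompt.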
Debanjum Singh Solanky
2b8f7f3efb Reuse a single func to format conversation for Gemini
This deduplicates code and prevents logic from deviating across gemini
chat actors
2024-10-08 18:41:42 -07:00
Debanjum Singh Solanky
452e360175 Do not use max prompt size to limit Gemini max output tokens
We should start disambiguating the max input size from the max output
size. Max prompt size should only be used for the max input context to
an LLM.

If required, max_output_tokens should be set as a separate new field
2024-10-08 15:30:08 -07:00
Debanjum Singh Solanky
bdc36fec5d Remove unnecessary whitespace indent from personality context 2024-10-08 15:30:08 -07:00
sabaimran
3daa3c003d When tool selection is not done successfully with an agent, return all agent tools as options 2024-10-08 15:03:58 -07:00
sabaimran
ad716ca58d Delete associated entries with an agent when it is deleted 2024-10-08 15:00:21 -07:00
sabaimran
f7fc6dbdc8 Limit agent creation and modification to subscribed users 2024-10-08 14:59:57 -07:00
sabaimran
c7638a783e Dynamically update added files when upload in agent creation 2024-10-07 21:54:11 -07:00
sabaimran
e10a0571ff Only check the prompt safety if the agent is not private 2024-10-07 21:42:14 -07:00
sabaimran
f700d5bddb Add summarization capability with agent knowledge base 2024-10-07 21:20:23 -07:00
sabaimran
df3dc33e96 Show reference icon and domain side by side 2024-10-07 20:28:48 -07:00
sabaimran
59e55f981f Reset agent to default when continuing with deceased agent 2024-10-07 20:28:33 -07:00
sabaimran
874776024a Handle chat history rendering when agent is deceased 2024-10-07 20:28:10 -07:00
sabaimran
f232c2b059 Allow user to chat with agent knowledge base if general mode 2024-10-07 19:55:33 -07:00
sabaimran
c00654ae58 Update default agent settings 2024-10-07 18:11:24 -07:00
sabaimran
3d0e183bea Add more log lines when encountering rate limiting 2024-10-07 14:36:12 -07:00
sabaimran
e4a8a69bc8 Add a subtle check mark when the copy button is selected 2024-10-07 09:41:03 -07:00
sabaimran
405c047c0c Include agent personality through subtasks and support custom agents (#916)
Currently, the personality of the agent is only included in the final response that it returns to the user. Historically, this was because models were quite bad at navigating the additional context of personality, and there was a bias towards having more control over certain operations (e.g., tool selection, question extraction).

Going forward, it should be more approachable to have prompts included in the sub tasks that Khoj runs in order to respond to a given query. Make this possible in this PR. This also sets us up for agent creation becoming available soon.

Create custom agents in #928

Agents are useful insofar as you can personalize them to fulfill specific subtasks you need to accomplish. In this PR, we add support for using custom agents that can be configured with a custom system prompt (aka persona) and knowledge base (from your own indexed documents). Once created, private agents can be accessible only to the creator, and protected agents can be accessible via a direct link.

Custom tool selection for agents in #930

Expose the functionality to select which tools a given agent has access to. By default, they have access to all of them. Can limit both information sources and output modes.
Add new tools to the agent modification form
2024-10-07 00:21:55 -07:00
sabaimran
c0193744f5 Update readme.md 2024-10-06 12:26:53 -07:00
sabaimran
d4ffeca90a Fix notion indexing with manually set token 2024-10-05 09:13:16 -07:00
sabaimran
29a422b6bc Remove the single dollar sign delimiters from katex rendering 2024-10-04 12:24:19 -07:00
Debanjum Singh Solanky
e217cb5840 Suggest notification type automation on Automation page of web app 2024-10-03 16:36:23 -07:00
Debanjum Singh Solanky
f626b34436 Update automation docs with screenshots 2024-10-03 16:36:05 -07:00
sabaimran
27c7e54695 Release Khoj version 1.24.1 2024-10-03 13:21:10 -07:00
Debanjum
4a1cb50da3 Make Online Search Location Aware (#929)
## Overview
Add user country code as context for doing online search with serper.dev API.
This should find more user relevant results from online searches by Khoj

## Details
### Major
- Default to using system clock to infer user timezone on js clients
- Infer country from timezone when only timezone received by chat API
- Localize online search results to user country when location available

### Minor
- Add `__str__` func to `LocationData` class to deduplicate location string generation
2024-10-03 12:33:47 -07:00
sabaimran
cb4052e333 Bump up rate limit for subscribed users and add an option to create new conversation in the POST request 2024-10-03 12:31:58 -07:00
sabaimran
7a5cd06162 Improve the login page (#931)
* Init version of improved login page
* Use split screen view, add a gradient
2024-10-02 14:26:46 -07:00
Debanjum Singh Solanky
591c582eeb Fix list references in use openai proxy docs 2024-09-30 23:21:33 -07:00
Debanjum Singh Solanky
852662f946 Use requestAnimationFrame for synced scroll on chat in web app
Make all the scroll actions just use requestAnimationFrame instead of
setTimeout. It better aligns with browser rendering loop, so better
for UX changes than setTimeout
2024-09-30 23:21:10 -07:00
sabaimran
57b4f844b7 Fail app start if initialization fails 2024-09-30 17:30:06 -07:00
Debanjum Singh Solanky
04aef362e2 Default to using system clock to infer user timezone on js clients
Using the system clock to infer the user timezone on clients makes
Khoj more robust at providing location-aware responses.

Previously only ip based location was used to infer timezone via API.
This didn't provide any decent fallback when calls to ipapi failed or
Khoj was being run in offline mode
2024-09-30 07:08:12 -07:00
Debanjum Singh Solanky
344f3c60ba Infer country from timezone when only tz received by chat API
The timezone is easier to infer using the client's system clock. It
can be used to infer the user's country name and country code, even
if IP-based location cannot be inferred.

This makes using location data to contextualize Khoj's responses more
robust. For example, online search results are retrieved for the user's
country, even if the call to ipapi.co for IP-based location fails
2024-09-30 07:08:11 -07:00
Debanjum Singh Solanky
1fed842fcc Localize online search results to user country when location available
Send the country code from the IP location check on clients to the server chat API.
Use the country code to get country-specific online search results via the Serper.dev API
2024-09-30 07:08:11 -07:00
Debanjum Singh Solanky
eb86f6fc42 Add __str__ func to LocationData class to dedupe location string gen
Previously the location string from location data was being generated
wherever it was being used.

By adding a __str__ representation to LocationData class, we can
dedupe and simplify the code to get the location string
2024-09-30 07:08:11 -07:00
sabaimran
1dfc89e79f Store conversation ID for new conversations as a string, not UUID 2024-09-29 18:07:08 -07:00
sabaimran
d92a349292 Improve image generation tool description 2024-09-29 16:20:25 -07:00
Debanjum Singh Solanky
d21a4e73a0 Update docs to use new variables to sync files, directories from khoj.el 2024-09-29 14:03:06 -07:00
Debanjum Singh Solanky
d66a0ccfaa Update client setup docs with instructions for self-hosting users
Resolves #808
2024-09-29 13:58:02 -07:00
Debanjum Singh Solanky
dd44933515 Release Khoj version 1.24.0 2024-09-29 04:56:11 -07:00
Debanjum Singh Solanky
1e8ce52d98 Reduce size of Khoj Docker images by removing layers and caches
- Align Dockerfile and prod.Dockerfile code
- Reduce Docker image size by 25% by reducing Docker layers and
  removing package caches
2024-09-29 04:06:35 -07:00
Debanjum Singh Solanky
9b10b3e7a1 Remove unused langchain openai server dependency 2024-09-29 04:06:35 -07:00
Debanjum Singh Solanky
e767b6eba3 Update Documentation with flags to enable GPU on Khoj pip install
- Use tabs for GPU/CPU type khoj being install on
- Update CMAKE flags used to install Khoj with correct GPU support
  Previous flags used DLLAMA; this has been updated to use DGGML in
  llama.cpp
2024-09-29 04:06:35 -07:00
sabaimran
63a2b5b3c4 Remove tools cache in dockerize.yml workflow 2024-09-29 00:27:37 -07:00
Debanjum Singh Solanky
936bc64b82 Render images to take full width of chat message div
Remove unnecessary "Inferred Query" heading prefix to image generation prompt
used by Khoj. The inferred query in the chat message has a heading of
its own, so avoid two headings for the image prompt
2024-09-28 23:45:56 -07:00
Debanjum Singh Solanky
4efa7d4464 Upgrade the Next.js web app package dependency 2024-09-28 23:45:56 -07:00
Debanjum Singh Solanky
b3cb417796 Fix spelling of Manage Context in Side Panel of Web App 2024-09-28 23:45:56 -07:00
sabaimran
676ff5fa69 Fix setting title on new conversations, add the action menu 2024-09-28 23:43:27 -07:00
Shantanu Sakpal
65d5e03f7f Reduce tooltip popup delay duration for Create Agent button on Web app (#926)
The problem was that the tooltip was visible on hover, but it was slow to appear, so the user would click the button before the tooltip popped up, and this stopped the tooltip from showing.

So I reduced the popup delay to 10ms. Now, as soon as the user hovers over the button, they will see that it's a feature coming soon!
2024-09-28 23:01:40 -07:00
Shantanu Sakpal
be8de1a1bd Only Auto Scroll when at Page Bottom and Add Button to Scroll to Page Bottom on Web App (#923)
Improve Scrolling on Chat page of Web app

- Details
  1. Only auto scroll Khoj's streamed response when scroll is near bottom of page
      Allows scrolling to other messages in conversation while Khoj is formulating and streaming its response
  2. Add button to scroll to bottom of the chat page
  3. Scroll to most recent conversation turn on conversation first load
      It's a better default to anchor to the most recent conversation turn (i.e. the most recent user message)
  4. Smooth scroll when Khoj's chat response is streamed
      Previously the scroll would jitter during response streaming
  5. Anchor scroll position when fetch and render older messages in conversation
      Allow users to keep their scroll position when older messages are fetched from server and rendered

Resolves #758
2024-09-28 22:54:34 -07:00
sabaimran
06777e1660 Convert the default conversation id to a uuid, plus other fixes (#918)
* Update the conversation_id primary key field to be a uuid

- update associated API endpoints
- this is to improve the overall application health, by obfuscating some information about the internal database
- conversation_id type is now implicitly a string, rather than an int
- ensure automations are also migrated in place, such that the conversation_ids they're pointing to are now mapped to the new IDs

* Update client-side API calls to correctly query with a string field

* Allow modifying of conversation properties from the chat title

* Improve drag and drop file experience for chat input area

* Use a phosphor icon for the copy to clipboard experience for code snippets

* Update conversation_id parameter to be a str type

* If django_apscheduler is not in the environment, skip the migration script

* Fix create automation flow by storing conversation id as string

The new UUID used for conversation id can't be directly serialized.
Convert to string for serializing it for later execution

---------

Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
2024-09-24 14:12:50 -07:00
Debanjum Singh Solanky
0c936cecc0 Release Khoj version 1.23.3 2024-09-24 12:44:09 -07:00
Debanjum Singh Solanky
61c6e742d5 Truncate chat context to max tokens for offline, openai chat actors too 2024-09-24 12:42:32 -07:00
sabaimran
e306e6ca94 Fix file paths used for pypi wheel building 2024-09-22 12:42:08 -07:00
Debanjum
f00e0e6080 Improve Khoj First Run, Docker Setup and Documentation (#919)
## Improve
- Intelligently initialize a decent default set of chat model options
- Create non-interactive mode. Auto set default server configuration on first run via Docker

## Fix
- Make RapidOCR dependency optional as flaky requirements causing docker build failures
- Set default openai text to image model correctly during initialization

## Details
Improve initialization flow during first run to remove need to configure Khoj:

- Set Google, Anthropic Chat models too
  Previously only Offline, Openai chat models could be set during init

- Add multiple chat models for each LLM provider
  Interactively set a comma separated list of models for each provider

- Auto add default chat models for each provider in non-interactive
  mode if the `{OPENAI,GEMINI,ANTHROPIC}_API_KEY' env var is set
  - Used when server run via Docker as user input cannot be processed to configure server during first run

- Do not ask for `max_tokens', `tokenizer' for offline models during
  initialization. Use better defaults inferred in code instead

- Explicitly set default chat model to use
  If unset, it implicitly defaults to using the first chat model.
  Make it explicit to reduce this confusion

Resolves #882
2024-09-21 14:15:45 -07:00
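The non-interactive auto-configuration above can be sketched roughly as follows. The env var names come from the PR description; the provider mapping and function shape are illustrative assumptions, not Khoj's actual code:

```python
import os

# Providers whose default chat models can be auto-added when their
# API key environment variable is set (per the PR description above).
PROVIDER_API_KEYS = {
    "openai": "OPENAI_API_KEY",
    "google": "GEMINI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def providers_to_auto_configure(env=os.environ) -> list[str]:
    """Providers to configure without user input, e.g. on first run in Docker."""
    return [provider for provider, var in PROVIDER_API_KEYS.items() if env.get(var)]
```

This is what lets a dockerized Khoj come up with working chat models on first run, since no interactive terminal is available to prompt the admin.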
Debanjum Singh Solanky
a6c0b43539 Upgrade documentation package dependencies 2024-09-21 14:06:40 -07:00
Debanjum Singh Solanky
2033f5168e Modularize chat models initialization with a reusable function
The chat model initialize interaction flow is fairly similar across
the chat model providers.

This should simplify adding new chat model providers and reduce
chances of bugs in the interactive chat model initialization flow.
2024-09-21 14:06:40 -07:00
Debanjum Singh Solanky
26c39576df Add Documentation for the settings on the Khoj Admin Panel
This is an initial pass to add documentation for all the knobs
available on the Khoj Admin panel.

It should shed some light onto what each admin setting is for and how
they can be customized when self hosting.

Resolves #831
2024-09-21 14:06:40 -07:00
Debanjum Singh Solanky
730e5608bb Improve Self Hosting Docs. Better Docker, Remote Access Setup Instructions
- Improve Self Hosting Docker Instructions
  - Ask to Install Docker Desktop to not require separate
    docker-compose install and unify the instruction across OS
  - To Self Host on Windows, ask to use Docker Desktop with WSL2 backend

- Use nested Tab grouping to split Docker vs Pip Self Host Instructions

- Reduce Self Host Setup Steps in Documentation after code simplification
  - First run now avoids need to configure Khoj via admin panel
  - So move the chat model config steps into optional post setup
    config section

- Improve Instructions to Configure chat models on First Run

- Compress configuring chat model providers into a Tab Group

- Add Documentation for Remote Access under Advanced Self Hosting
2024-09-21 14:06:17 -07:00
Debanjum Singh Solanky
91c76d4152 Intelligently initialize a decent default set of chat model options
Given the LLM landscape is rapidly changing, providing a good default
set of options should help reduce decision fatigue to get started

Improve initialization flow during first run
- Set Google, Anthropic Chat models too
  Previously only Offline, Openai chat models could be set during init

- Add multiple chat models for each LLM provider
  Interactively set a comma separated list of models for each provider

- Auto add default chat models for each provider in non-interactive
  mode if the {OPENAI,GEMINI,ANTHROPIC}_API_KEY env var is set

- Do not ask for max_tokens, tokenizer for offline models during
  initialization. Use better defaults inferred in code instead

- Explicitly set default chat model to use
  If unset, it implicitly defaults to using the first chat model.
  Make it explicit to reduce this confusion

Resolves #882
2024-09-19 20:32:08 -07:00
Debanjum Singh Solanky
f177723711 Add default server configuration on first run in non-interactive mode
This should configure Khoj with decent default configurations via
Docker and avoid needing to configure Khoj via the admin page to start
using dockerized Khoj

Update default max prompt size set during khoj initialization
as online chat models are cheaper and offline chat models have larger
context now
2024-09-19 15:12:55 -07:00
Debanjum Singh Solanky
020167c7cf Set default openai text to image model correctly during initialization
The speech to text model was previously being set to the text to
image model by mistake!
2024-09-19 15:11:34 -07:00
Debanjum Singh Solanky
077b88bafa Make RapidOCR dependency optional as flaky requirements
RapidOCR depends on OpenCV, which by default requires a bunch of GUI
system packages. This system package dependency set (like libgl1) is flaky

Making the RapidOCR dependency optional should allow khoj to be more
resilient to setup/dependency failures

Trade-off is that OCR for documents may not always be available and
it'll require looking at server logs to find out when this happens
2024-09-19 15:10:31 -07:00
sabaimran
0a568244fd Revert "Convert conversationId int to string before making api request to bulk update file filters"
This reverts commit c9665fb20b.

Revert "Fix handling for new conversation in agents page"

This reverts commit 3466f04992.

Revert "Add a unique_id field for identifiying conversations (#914)"

This reverts commit ece2ec2d90.
2024-09-18 20:36:57 -07:00
Debanjum Singh Solanky
bb2bd77a64 Send chat message to Khoj web app via url query param
- This allows triggering khoj chat from the browser address bar
- So now if you add Khoj to your browser bookmark with
  - URL: https://app.khoj.dev/?q=%s
  - Keyword: khoj

- Then you can type "khoj what is the news today" to trigger Khoj to
  quickly respond to your query. This avoids having to open the Khoj web
  app before asking your question
2024-09-17 21:50:47 -07:00
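The bookmark trick above just URL-encodes the query into the `q` parameter. A minimal sketch of how such a URL could be built:

```python
from urllib.parse import urlencode

def khoj_chat_url(query: str, base: str = "https://app.khoj.dev/") -> str:
    """Build a Khoj web app URL that triggers chat with the given query."""
    return f"{base}?{urlencode({'q': query})}"

# khoj_chat_url("what is the news today")
# -> https://app.khoj.dev/?q=what+is+the+news+today
```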
Debanjum Singh Solanky
ecdbcd815e Simplify code to remove json codeblock from AI response string 2024-09-17 21:50:47 -07:00
sabaimran
e457720e8a Improve the email templates and better align with new branding 2024-09-17 11:18:25 -07:00
sabaimran
c9665fb20b Convert conversationId int to string before making api request to bulk update file filters 2024-09-16 15:45:23 -07:00
sabaimran
3466f04992 Fix handling for new conversation in agents page 2024-09-16 15:04:49 -07:00
sabaimran
ece2ec2d90 Add a unique_id field for identifying conversations (#914)
* Add a unique_id field to the conversation object

- This helps us keep track of the unique identity of the conversation without exposing the internal id
- Create three staged migrations in order to first add the field, then add unique values to pre-fill, and then set the unique constraint. Without this, it tries to initialize all the existing conversations with the same ID.

* Parse and utilize the unique_id field in the query parameters of the front-end view

- Handle the unique_id field when creating a new conversation from the home page
- Parse the id field with a lightweight parameter called v in the chat page
- Share page should not be affected, as it uses the public slug

* Fix suggested card category
2024-09-16 12:19:16 -07:00
sabaimran
e6bc7a2ba2 Fix links to log in email templates 2024-09-15 19:14:19 -07:00
Debanjum Singh Solanky
79980feb7b Release Khoj version 1.23.2 2024-09-15 03:07:26 -07:00
Debanjum Singh Solanky
575ff103cf Frame chat response error on web app in a more conversational form
Also indicate that hitting dislike on the message should be enough to
convey the issue to the developers.
2024-09-15 03:00:49 -07:00
Debanjum Singh Solanky
893ae60a6a Improve handling of harmful categorized responses by Gemini
Previously Khoj would stop in the middle of response generation when
the safety filters got triggered at default thresholds. This was
confusing as it felt like a service error, not expected behavior.

Going forward Khoj will
- Only block responding to high confidence harmful content detected by
  Gemini's safety filters instead of using the default safety settings
- Show an explanatory, conversational response (w/ harm category)
  when response is terminated due to Gemini's safety filters
2024-09-15 02:17:54 -07:00
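The relaxed safety configuration above could look roughly like the mapping below. The category and threshold names are taken from Google's Gemini safety-settings docs; treat them as assumptions and verify against the SDK version in use:

```python
# Category and threshold names follow Google's Gemini safety settings docs;
# verify against the SDK version in use before relying on them.
HARM_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

def lenient_safety_settings(threshold: str = "BLOCK_ONLY_HIGH") -> dict:
    """Only block high-confidence harmful content instead of the stricter defaults."""
    return {category: threshold for category in HARM_CATEGORIES}
```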
sabaimran
ec1f87a896 Release Khoj version 1.23.1 2024-09-12 22:46:39 -07:00
sabaimran
2a4416d223 Use prefetch_related for the openai_config when retrieving all chatmodeloptions async 2024-09-12 22:45:43 -07:00
sabaimran
253ca92203 Release Khoj version 1.23.0 2024-09-12 20:25:29 -07:00
Debanjum Singh Solanky
178b78f87b Show debug log, not warning when use default tokenizer for context stuffing 2024-09-12 20:21:01 -07:00
Debanjum
f173188dcf Support using image generation models like Flux via Replicate (#909)
- Support using image generation models like Flux via Replicate
- Modularize the image generation code
- Make generate better image prompt chat actor add composition details
- Generate vivid images with DALLE-3
2024-09-12 20:19:46 -07:00
Debanjum Singh Solanky
75d3b34452 Extract image generation code into new image processor for modularity 2024-09-12 20:01:32 -07:00
Debanjum Singh Solanky
84051d7d89 Make generate better image prompt chat actor add composition details 2024-09-12 19:58:57 -07:00
Debanjum Singh Solanky
ed12f45a26 Generate vivid images with DALLE-3
It's apparently the default setting in chatgpt app according to the
openai cookbook at https://cookbook.openai.com/articles/what_is_new_with_dalle_3#examples-and-prompts
2024-09-12 19:58:57 -07:00
Debanjum Singh Solanky
1b82aea753 Support using image generation models like Flux via Replicate
Enables using any image generation model on Replicate's Predictions
API endpoints.

The server admin just needs to add a text-to-image model on the
server/admin panel in organization/model_name format and input their
Replicate API key with it.

Create db migration (including merge)
2024-09-12 19:58:56 -07:00
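Parsing the `organization/model_name` string an admin enters could look like this minimal sketch (the example model name in the usage note is illustrative):

```python
def parse_replicate_model(model_string: str) -> tuple:
    """Split an 'organization/model_name' string as entered on the admin panel."""
    org, _, name = model_string.partition("/")
    if not org or not name:
        raise ValueError(f"Expected 'organization/model_name', got: {model_string!r}")
    return org, name
```

For example, `parse_replicate_model("black-forest-labs/flux-schnell")` would yield the organization and model name as separate parts.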
Brian Kanya
1d512b4986 Use environment variable to set sender email of auth link emails (#907)
Set sender email using `RESEND_EMAIL` environment variable for magic link sent via Resend API for authentication. It was previously hard-coded. This prevented hosting Khoj on other domains.

Resolves #908
2024-09-12 18:48:11 -07:00
Debanjum
26ca3df605 Support OpenAI's new O1 Model Series (#912)
- Major
   - The new O1 series doesn't seem to support streaming, response_format enforcement, 
      stop words or temperature currently. 
   - Remove any markdown json codeblock in chat actors expecting json responses

- Minor
   - Override block display styling of links by Katex in chat messages
2024-09-12 18:42:51 -07:00
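Working around the missing parameters could be sketched as below. The unsupported-parameter set reflects the limitations noted in this changelog at the time and may have changed since:

```python
# Parameters the o1 series reportedly did not support at launch
# (per this changelog; verify against current OpenAI API docs).
UNSUPPORTED_O1_PARAMS = {"stream", "temperature", "stop", "response_format"}

def adapt_chat_params(model: str, params: dict) -> dict:
    """Drop request parameters unsupported by o1-style models."""
    if model.startswith("o1"):
        return {k: v for k, v in params.items() if k not in UNSUPPORTED_O1_PARAMS}
    return dict(params)
```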
Debanjum Singh Solanky
0685a79748 Remove any markdown json codeblock in chat actors expecting json responses
Strip any json md codeblock wrapper, if it exists, before processing the
response by the output mode and extract questions chat actors. This is similar
to what is already being done by other chat actors

Useful for successfully interpreting json output in chat actors when
using non json-schema-enforceable models like o1 and gemma-2

Use conversation helper function to centralize the json md codeblock
removal code
2024-09-12 18:26:15 -07:00
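A minimal sketch of such a json md codeblock stripper (not Khoj's actual helper):

```python
import re

# Matches an optional ```json ... ``` fence wrapping the whole string.
CODEBLOCK_RE = re.compile(r"^```(?:json)?\s*\n(.*?)\n?```\s*$", re.DOTALL)

def strip_json_codeblock(text: str) -> str:
    """Remove a wrapping markdown json codeblock fence, if present, before parsing."""
    match = CODEBLOCK_RE.match(text.strip())
    return match.group(1) if match else text.strip()
```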
Debanjum Singh Solanky
6e660d11c9 Override block display styling of links by Katex in chat messages
This happens sometimes when the LLM response contains [\[1\]] kind of links
as references. Both markdown-it and katex apply styling.

Katex's span uses display: block which makes the rendering of these
references take up a whole line by themselves.

Override block styling of spans within an `a' element to prevent such
chat message styling issues
2024-09-12 18:22:46 -07:00
Debanjum Singh Solanky
272eae5d66 Add support for the newly released OpenAI O1 model series for preview
The O1 series doesn't seem to support streaming, stop words,
temperature or response_format currently.
2024-09-12 18:22:46 -07:00
Alexander Matyasko
9570933506 Support Google's Gemini model series (#902)
* Add functions to chat with Google's gemini model series
  * Gracefully close thread when there's an exception in the gemini llm thread
* Use enums for verifying the chat model option type
* Add a migration to add the gemini chat model type to the db model
* Fix chat model selection verification and math prompt tuning
* Fix extract questions method with gemini. Enforce json response in extract questions.
* Add standard stop sequence for Gemini chat response generation

---------

Co-authored-by: sabaimran <narmiabas@gmail.com>
Co-authored-by: Debanjum Singh Solanky <debanjum@gmail.com>
2024-09-12 18:17:55 -07:00
Debanjum Singh Solanky
42b727e926 Revert additional logging enabled to debug automation failures in prod
Additional logging was enabled to debug automation failures in
production since migrating the chat API to use the POST request method
(from the earlier GET).

The redirect from http to https defaulted to using the GET instead of
POST method to call /api/chat. This has been resolved now
2024-09-12 17:56:54 -07:00
sabaimran
14a495cbb5 Release Khoj version 1.22.3 2024-09-12 12:39:04 -07:00
sabaimran
91cee2eaa8 Handle redirects when scheduling chats from automations 2024-09-12 11:36:47 -07:00
sabaimran
4555969d38 Add additional log lines 2024-09-12 10:50:36 -07:00
sabaimran
12897a9a62 Update link to gif demo in README to pull from GitHub 2024-09-11 20:09:26 -07:00
sabaimran
9310f88537 Add quadratic equation gif to docs 2024-09-11 20:07:57 -07:00
sabaimran
d042a055cf Update the demo and simplify the readme 2024-09-11 20:03:48 -07:00
sabaimran
4d3224657f Update the documentation with swanky new demo videos 2024-09-11 19:57:10 -07:00
352 changed files with 33551 additions and 16542 deletions


@@ -1,10 +1,11 @@
.git/
.pytest_cache/
.vscode/
.venv/
docs/
.*
**/__pycache__/
*.egg-info/
documentation/
tests/
build/
dist/
scripts/
*.egg-info/
src/interface/
src/telemetry/
!src/interface/web


@@ -1,42 +0,0 @@
---
name: Bug Report
about: Create a bug to help fix something that might not be working correctly
title: "[FIX]"
labels: fix
assignees: ''
---
## Describe the bug
A clear and concise description of what the bug is. Please include what you were expecting to happen vs. what actually happened.
## To Reproduce
Steps to reproduce the behavior:
## Screenshots
If applicable, add screenshots to help explain your problem.
## Platform
- Server:
- [ ] Cloud-Hosted (https://app.khoj.dev)
- [ ] Self-Hosted Docker
- [ ] Self-Hosted Python package
- [ ] Self-Hosted source code
- Client:
- [ ] Obsidian
- [ ] Emacs
- [ ] Desktop app
- [ ] Web browser
- [ ] WhatsApp
- OS:
- [ ] Windows
- [ ] macOS
- [ ] Linux
- [ ] Android
- [ ] iOS
### If self-hosted
- Server Version [e.g. 1.0.1]:
## Additional context
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/bug-report.yml vendored Normal file

@@ -0,0 +1,144 @@
name: "🪲 Bug Report"
description: "Use this template to report a bug."
labels: ["fix"]
body:
- type: "markdown"
attributes:
value: |
> [!IMPORTANT]
> To save time for both you and us, try to follow these guidelines before submitting a new issue:
> 1. Check if there is an existing issue tracking your bug on our Github.
> 2. When unsure if your issue is an actual bug, first discuss it on a [Github discussion](https://github.com/khoj-ai/khoj/discussions/new?category=q-a) or the **#bugs** channel of our [Discord server](https://discord.gg/b6gUdpKr).
> These steps avoid opening issues which are duplicates or not actual bugs.
- type: "checkboxes"
id: "server"
attributes:
label: "Server"
description: "With which Khoj server are you experiencing the issue? (select at least one)"
options:
- label: "Cloud (https://app.khoj.dev)"
required: false
- label: "Self-Hosted Docker"
required: false
- label: "Self-Hosted Python package"
required: false
- label: "Self-Hosted source code"
required: false
validations:
required: true
- type: "checkboxes"
id: "clients"
attributes:
label: "Clients"
description: "With which Khoj client(s) are you experiencing the issue? (select at least one)"
options:
- label: "Web browser"
required: false
- label: "Desktop/mobile app"
required: false
- label: "Obsidian"
required: false
- label: "Emacs"
required: false
- label: "WhatsApp"
required: false
validations:
required: true
- type: "checkboxes"
id: "os"
attributes:
label: "OS"
description: "On which operating system do you experience the issue? (select at least one)"
options:
- label: "Windows"
required: false
- label: "macOS"
required: false
- label: "Linux"
required: false
- label: "Android"
required: false
- label: "iOS"
required: false
validations:
required: true
- type: "input"
id: "version"
attributes:
label: "Khoj version"
description: "Use `/help` command on the chat page to find the server version.\n
If using the cloud service, you can input `latest`.\n
If self-hosting - please make sure to run the latest version of Khoj before reporting any issues as it may have already been fixed."
validations:
required: true
- type: "textarea"
id: "description"
attributes:
label: "Describe the bug"
description: "What is the problem? A clear and concise description of the bug."
validations:
required: true
- type: "textarea"
id: "current"
attributes:
label: "Current Behavior"
description: "What actually happened?\n\n
Please include full errors, uncaught exceptions, stack traces, screenshots and other relevant logs."
validations:
required: true
- type: "textarea"
id: "expected"
attributes:
label: "Expected Behavior"
description: "What did you expect to happen?"
validations:
required: true
- type: "textarea"
id: "reproduction"
attributes:
label: "Reproduction Steps"
description: "Detail the steps needed to reproduce the issue. This can include a self-contained, concise snippet of
code, if applicable.\n\n
For more complex issues, provide a link to a repository with the smallest sample that reproduces
the bug.\n
If the issue can be replicated without code, please provide a clear, step-by-step description of
the actions or conditions necessary to reproduce it. Any screenshots are also appreciated.\n
Avoid including business logic or unrelated details, as this makes diagnosis more difficult.\n\n
Whether it's a sequence of actions, code samples, or specific conditions, ensure that the steps
are clear enough to be easily followed and replicated."
validations:
required: true
- type: "textarea"
id: "workaround"
attributes:
label: "Possible Workaround"
description: "If you find any workaround for this problem - please, provide it here."
validations:
required: false
- type: "textarea"
id: "context"
attributes:
label: "Additional Information"
description: "Anything else that might be relevant for troubleshooting this bug.\n
Providing context helps us come up with a solution that is most useful in the real-world use case.\n
For example if self-hosting, provide environment details like OS version, Docker version etc."
validations:
required: false
- type: "input"
id: "discussion_link"
attributes:
label: "Link to Discord or Github discussion"
description: "Provide a link to the first message of the bug's discussion on Discord or Github.\n
This will help keep a history of why this bug exists."
validations:
required: false


@@ -42,10 +42,10 @@ jobs:
run: |
npx todesktop build
- name: 📦 Release Desktop App
if: startsWith(github.ref, 'refs/tags/')
run: |
npx todesktop release --latest --force
# - name: 📦 Release Desktop App
# if: startsWith(github.ref, 'refs/tags/')
# run: |
# npx todesktop release --latest --force
- name: ⤵️ Get Desktop Apps
if: startsWith(github.ref, 'refs/tags/')
@@ -60,7 +60,7 @@ jobs:
- name: ⏫ Upload Mac ARM App
if: startsWith(github.ref, 'refs/tags/')
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
if-no-files-found: warn
name: khoj-${{ github.ref_name }}-arm64.dmg
@@ -68,7 +68,7 @@ jobs:
- name: ⏫ Upload Mac x64 App
if: startsWith(github.ref, 'refs/tags/')
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
if-no-files-found: warn
name: khoj-${{ github.ref_name }}-x64.dmg
@@ -76,7 +76,7 @@ jobs:
- name: ⏫ Upload Windows App
if: startsWith(github.ref, 'refs/tags/')
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
if-no-files-found: warn
name: khoj-${{ github.ref_name }}-x64.exe
@@ -84,7 +84,7 @@ jobs:
- name: ⏫ Upload Debian App
if: startsWith(github.ref, 'refs/tags/')
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
if-no-files-found: warn
name: khoj-${{ github.ref_name }}-x64.deb
@@ -92,7 +92,7 @@ jobs:
- name: ⏫ Upload Linux App Image
if: startsWith(github.ref, 'refs/tags/')
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
if-no-files-found: warn
name: khoj-${{ github.ref_name }}-x64.AppImage


@@ -38,13 +38,23 @@ env:
jobs:
build:
name: Publish Khoj Docker Images
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
image:
- 'local'
- 'cloud'
include:
- image: 'local'
platform: linux/amd64
runner: ubuntu-latest
- image: 'local'
platform: linux/arm64
runner: ubuntu-linux-arm64
- image: 'cloud'
platform: linux/amd64
runner: ubuntu-latest
- image: 'cloud'
platform: linux/arm64
runner: ubuntu-linux-arm64
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout Code
uses: actions/checkout@v3
@@ -70,31 +80,70 @@ jobs:
run: rm -rf /opt/hostedtoolcache
- name: 📦 Build and Push Docker Image
uses: docker/build-push-action@v2
uses: docker/build-push-action@v4
if: (matrix.image == 'local' && github.event_name == 'workflow_dispatch') && github.event.inputs.khoj == 'true' || (matrix.image == 'local' && github.event_name == 'push')
with:
context: .
file: Dockerfile
platforms: linux/amd64, linux/arm64
push: true
tags: |
ghcr.io/${{ github.repository }}:${{ env.DOCKER_IMAGE_TAG }}
${{ github.ref_type == 'tag' && format('ghcr.io/{0}:latest', github.repository) || '' }}
ghcr.io/${{ github.repository }}:${{ env.DOCKER_IMAGE_TAG }}-${{ matrix.platform == 'linux/amd64' && 'amd64' || 'arm64' }}
build-args: |
VERSION=${{ steps.hatch.outputs.version }}
PORT=42110
cache-from: type=gha,scope=${{ matrix.image }}-${{ matrix.platform }}
cache-to: type=gha,mode=max,scope=${{ matrix.image }}-${{ matrix.platform}}
labels: |
org.opencontainers.image.description=Khoj AI - Your second brain powered by LLMs and Neural Search
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
- name: 📦️⛅️ Build and Push Cloud Docker Image
uses: docker/build-push-action@v2
uses: docker/build-push-action@v4
if: (matrix.image == 'cloud' && github.event_name == 'workflow_dispatch') && github.event.inputs.khoj-cloud == 'true' || (matrix.image == 'cloud' && github.event_name == 'push')
with:
context: .
file: prod.Dockerfile
platforms: linux/amd64, linux/arm64
push: true
tags: |
ghcr.io/${{ github.repository }}-cloud:${{ env.DOCKER_IMAGE_TAG }}
${{ github.ref_type == 'tag' && format('ghcr.io/{0}-cloud:latest', github.repository) || '' }}
ghcr.io/${{ github.repository }}-cloud:${{ env.DOCKER_IMAGE_TAG }}-${{ matrix.platform == 'linux/amd64' && 'amd64' || 'arm64' }}
build-args: |
VERSION=${{ steps.hatch.outputs.version }}
PORT=42110
cache-from: type=gha,scope=${{ matrix.image }}-${{ matrix.platform }}
cache-to: type=gha,mode=max,scope=${{ matrix.image }}-${{ matrix.platform}}
labels: |
org.opencontainers.image.description=Khoj AI Cloud - Your second brain powered by LLMs and Neural Search
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
manifest:
needs: build
runs-on: ubuntu-latest
if: github.event_name != 'pull_request'
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.PAT }}
- name: Create and Push Local Manifest
if: github.event.inputs.khoj == 'true' || github.event_name == 'push'
run: |
docker buildx imagetools create \
-t ghcr.io/${{ github.repository }}:${{ env.DOCKER_IMAGE_TAG }} \
-t ghcr.io/${{ github.repository }}:${{ github.ref_type == 'tag' && 'latest' || env.DOCKER_IMAGE_TAG }} \
ghcr.io/${{ github.repository }}:${{ env.DOCKER_IMAGE_TAG }}-amd64 \
ghcr.io/${{ github.repository }}:${{ env.DOCKER_IMAGE_TAG }}-arm64
- name: Create and Push Cloud Manifest
if: github.event.inputs.khoj-cloud == 'true' || github.event_name == 'push'
run: |
docker buildx imagetools create \
-t ghcr.io/${{ github.repository }}-cloud:${{ env.DOCKER_IMAGE_TAG }} \
-t ghcr.io/${{ github.repository }}-cloud:${{ github.ref_type == 'tag' && 'latest' || env.DOCKER_IMAGE_TAG }} \
ghcr.io/${{ github.repository }}-cloud:${{ env.DOCKER_IMAGE_TAG }}-amd64 \
ghcr.io/${{ github.repository }}-cloud:${{ env.DOCKER_IMAGE_TAG }}-arm64


@@ -1,4 +1,4 @@
name: build and deploy github pages for documentation
name: deploy documentation
on:
push:
branches:
@@ -17,10 +17,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
# 👇 Build steps
- name: Set up Node.js
uses: actions/setup-node@v3
uses: actions/setup-node@v4
with:
node-version: 18.x
cache: yarn
@@ -35,12 +35,12 @@ jobs:
yarn build
# 👆 Build steps
- name: Setup Pages
uses: actions/configure-pages@v3
uses: actions/configure-pages@v5
- name: Upload artifact
uses: actions/upload-pages-artifact@v2
uses: actions/upload-pages-artifact@v3
with:
# 👇 Specify build output path
path: documentation/build
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v2
uses: actions/deploy-pages@v4


@@ -49,7 +49,7 @@ jobs:
- name: 📂 Copy Generated Files
run: |
mkdir -p src/khoj/interface/compiled
cp -r /opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/khoj/interface/compiled/* src/khoj/interface/compiled/
cp -r /opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/khoj/interface/compiled/* src/khoj/interface/compiled/
- name: ⚙️ Build Python Package
run: |
@@ -75,6 +75,7 @@ jobs:
- name: 📦 Publish Python Package to PyPI
if: startsWith(github.ref, 'refs/tags') || github.ref == 'refs/heads/master'
uses: pypa/gh-action-pypi-publish@v1.8.14
uses: pypa/gh-action-pypi-publish@release/v1.12
with:
skip-existing: true
print-hash: true


@@ -35,21 +35,21 @@ jobs:
yarn run build --if-present
- name: ⏫ Upload Obsidian Plugin main.js
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
if-no-files-found: error
name: main.js
path: src/interface/obsidian/main.js
- name: ⏫ Upload Obsidian Plugin manifest.json
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
if-no-files-found: error
name: manifest.json
path: src/interface/obsidian/manifest.json
- name: ⏫ Upload Obsidian Plugin styles.css
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
if-no-files-found: error
name: styles.css

.github/workflows/run_evals.yml vendored Normal file

@@ -0,0 +1,181 @@
name: eval
on:
# Run on every release
push:
tags:
- "*"
# Allow manual triggers from GitHub UI
workflow_dispatch:
inputs:
khoj_mode:
description: 'Khoj Mode (general/default/research)'
required: true
default: 'default'
type: choice
options:
- general
- default
- research
dataset:
description: 'Dataset to evaluate (frames/simpleqa)'
required: true
default: 'frames'
type: choice
options:
- frames
- simpleqa
- gpqa
- math500
sample_size:
description: 'Number of samples to evaluate'
required: false
default: 200
type: number
sandbox:
description: 'Code sandbox to use'
required: false
default: 'terrarium'
type: choice
options:
- terrarium
- e2b
chat_model:
description: 'Chat model to use'
required: false
default: 'gemini-2.0-flash'
type: string
max_research_iterations:
description: 'Maximum number of iterations in research mode'
required: false
default: 5
type: number
jobs:
eval:
runs-on: ubuntu-latest
strategy:
matrix:
# Use input from manual trigger if available, else run all combinations
khoj_mode: ${{ github.event_name == 'workflow_dispatch' && fromJSON(format('["{0}"]', inputs.khoj_mode)) || fromJSON('["general", "default", "research"]') }}
dataset: ${{ github.event_name == 'workflow_dispatch' && fromJSON(format('["{0}"]', inputs.dataset)) || fromJSON('["frames", "gpqa"]') }}
services:
postgres:
image: ankane/pgvector
env:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: postgres
ports:
- 5432:5432
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Get App Version
id: hatch
run: echo "version=$(pipx run hatch version)" >> $GITHUB_OUTPUT
- name: ⏬️ Install Dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
# install postgres and other dependencies
sudo apt update && sudo apt install -y git python3-pip libegl1 sqlite3 libsqlite3-dev libsqlite3-0 ffmpeg libsm6 libxext6
sudo apt install -y postgresql postgresql-client && sudo apt install -y postgresql-server-dev-16
# upgrade pip
python -m ensurepip --upgrade && python -m pip install --upgrade pip
# install terrarium for code sandbox
git clone https://github.com/khoj-ai/terrarium.git && cd terrarium && npm install --legacy-peer-deps && mkdir pyodide_cache
- name: ⬇️ Install Application
run: |
sed -i 's/dynamic = \["version"\]/version = "${{ steps.hatch.outputs.version }}"/' pyproject.toml
pip install --upgrade .[dev]
- name: 📝 Run Eval
env:
KHOJ_MODE: ${{ matrix.khoj_mode }}
SAMPLE_SIZE: ${{ github.event_name == 'workflow_dispatch' && inputs.sample_size || 200 }}
BATCH_SIZE: "20"
RANDOMIZE: "True"
KHOJ_URL: "http://localhost:42110"
KHOJ_DEFAULT_CHAT_MODEL: ${{ github.event_name == 'workflow_dispatch' && inputs.chat_model || 'gemini-2.0-flash' }}
KHOJ_LLM_SEED: "42"
KHOJ_RESEARCH_ITERATIONS: ${{ github.event_name == 'workflow_dispatch' && inputs.max_research_iterations || 5 }}
GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
SERPER_DEV_API_KEY: ${{ matrix.dataset != 'math500' && secrets.SERPER_DEV_API_KEY || '' }}
OLOSTEP_API_KEY: ${{ matrix.dataset != 'math500' && secrets.OLOSTEP_API_KEY || ''}}
HF_TOKEN: ${{ secrets.HF_TOKEN }}
E2B_API_KEY: ${{ inputs.sandbox == 'e2b' && secrets.E2B_API_KEY || '' }}
E2B_TEMPLATE: ${{ vars.E2B_TEMPLATE }}
KHOJ_ADMIN_EMAIL: khoj
KHOJ_ADMIN_PASSWORD: khoj
POSTGRES_HOST: localhost
POSTGRES_PORT: 5432
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: postgres
KHOJ_TELEMETRY_DISABLE: "True" # To disable telemetry for tests
run: |
# Start Khoj server in background
khoj --anonymous-mode --non-interactive &
# Start code sandbox
npm install -g pm2
NODE_ENV=production npm run ci --prefix terrarium
# Wait for server to be ready
timeout=120
while ! curl -s http://localhost:42110/api/health > /dev/null; do
if [ $timeout -le 0 ]; then
echo "Timed out waiting for Khoj server"
exit 1
fi
echo "Waiting for Khoj server..."
sleep 2
timeout=$((timeout-2))
done
# Run evals
python tests/evals/eval.py -d ${{ matrix.dataset }}
- name: Upload Results
if: always() # Upload results even if tests fail
uses: actions/upload-artifact@v4
with:
name: eval-results-${{ steps.hatch.outputs.version }}-${{ matrix.khoj_mode }}-${{ matrix.dataset }}
path: |
*_evaluation_results_*.csv
*_evaluation_summary_*.txt
- name: Display Results
if: always()
run: |
# Read and display summary
echo "## Evaluation Summary of Khoj on ${{ matrix.dataset }} in ${{ matrix.khoj_mode }} mode" >> $GITHUB_STEP_SUMMARY
echo "**$(head -n 1 *_evaluation_summary_*.txt)**" >> $GITHUB_STEP_SUMMARY
echo "- Khoj Version: ${{ steps.hatch.outputs.version }}" >> $GITHUB_STEP_SUMMARY
echo "- Chat Model: ${{ inputs.chat_model || 'gemini-2.0-flash' }}" >> $GITHUB_STEP_SUMMARY
echo "- Code Sandbox: ${{ inputs.sandbox || 'terrarium' }}" >> $GITHUB_STEP_SUMMARY
echo "\`\`\`" >> $GITHUB_STEP_SUMMARY
tail -n +2 *_evaluation_summary_*.txt >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "\`\`\`" >> $GITHUB_STEP_SUMMARY
# Display in logs too
echo "===== EVALUATION RESULTS ====="
cat *_evaluation_summary_*.txt


@@ -5,6 +5,7 @@ on:
paths:
- src/khoj/**
- tests/**
- '!tests/evals/**'
- config/**
- pyproject.toml
- .pre-commit-config.yml
@@ -15,6 +16,7 @@ on:
paths:
- src/khoj/**
- tests/**
- '!tests/evals/**'
- config/**
- pyproject.toml
- .pre-commit-config.yml
@@ -24,7 +26,7 @@ jobs:
test:
name: Run Tests
runs-on: ubuntu-latest
container: ubuntu:jammy
container: ubuntu:latest
strategy:
fail-fast: false
matrix:
@@ -53,30 +55,31 @@ jobs:
with:
python-version: ${{ matrix.python_version }}
- name: Install Git
run: |
apt update && apt install -y git
- name: ⏬️ Install Dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt update && apt install -y libegl1 sqlite3 libsqlite3-dev libsqlite3-0 ffmpeg libsm6 libxext6
apt update && apt install -y git libegl1 sqlite3 libsqlite3-dev libsqlite3-0 ffmpeg libsm6 libxext6
# required by llama-cpp-python prebuilt wheels
apt install -y musl-dev && ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1
- name: ⬇️ Install Postgres
env:
DEBIAN_FRONTEND: noninteractive
run : |
apt install -y postgresql postgresql-client && apt install -y postgresql-server-dev-14
apt install -y postgresql postgresql-client && apt install -y postgresql-server-dev-16
- name: ⬇️ Install pip
run: |
apt install -y python3-pip
python -m ensurepip --upgrade
python -m pip install --upgrade pip
python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip
- name: ⬇️ Install Application
run: sed -i 's/dynamic = \["version"\]/version = "0.0.0"/' pyproject.toml && pip install --upgrade .[dev]
env:
PIP_EXTRA_INDEX_URL: "https://download.pytorch.org/whl/cpu https://abetlen.github.io/llama-cpp-python/whl/cpu"
CUDA_VISIBLE_DEVICES: ""
run: sed -i 's/dynamic = \["version"\]/version = "0.0.0"/' pyproject.toml && pip install --break-system-packages --upgrade .[dev]
- name: 🧪 Test Application
env:

.gitignore vendored

@@ -3,6 +3,8 @@
*.pt
tests/data/models
tests/data/embeddings
tests/evals/**.txt
tests/evals/**.csv
# External app artifacts
__pycache__
@@ -42,3 +44,13 @@ src/interface/obsidian/main.js
# obsidian
data.json
# Android
src/interface/android/.gradle
src/interface/android/app/build
src/interface/android/build
src/interface/android/*.aab
src/interface/android/*.apk
src/interface/android/*.apk.idsig
src/interface/android/*.keystore
src/interface/android/local.properties


@@ -26,7 +26,7 @@ repos:
rev: v1.0.0
hooks:
- id: mypy
stages: [push, manual]
stages: [pre-push, manual]
pass_filenames: false
args:
- --config-file=pyproject.toml


@@ -1,44 +1,63 @@
# syntax=docker/dockerfile:1
FROM ubuntu:jammy
FROM ubuntu:jammy AS base
LABEL homepage="https://khoj.dev"
LABEL repository="https://github.com/khoj-ai/khoj"
LABEL org.opencontainers.image.source="https://github.com/khoj-ai/khoj"
LABEL org.opencontainers.image.description="Your second brain, containerized for personal, local deployment."
# Install System Dependencies
RUN apt update -y && apt -y install python3-pip swig curl
RUN apt update -y && apt -y install \
python3-pip \
swig \
curl \
# Required by RapidOCR
libgl1 \
libglx-mesa0 \
libglib2.0-0 \
# Required by llama-cpp-python pre-built wheels. See #1628
musl-dev && \
ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1 && \
# Clean up
apt clean && rm -rf /var/lib/apt/lists/*
# Install Node.js and Yarn
RUN curl -sL https://deb.nodesource.com/setup_20.x | bash -
RUN apt -y install nodejs
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt -y install yarn
# Install RapidOCR dependencies
RUN apt -y install libgl1 libgl1-mesa-glx libglib2.0-0
# Install Application
# Build Server
FROM base AS server-deps
WORKDIR /app
COPY pyproject.toml .
COPY README.md .
ARG VERSION=0.0.0
# use the pre-built llama-cpp-python, torch cpu wheel
ENV PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu https://abetlen.github.io/llama-cpp-python/whl/cpu"
# avoid downloading unused cuda specific python packages
ENV CUDA_VISIBLE_DEVICES=""
RUN sed -i "s/dynamic = \\[\"version\"\\]/version = \"$VERSION\"/" pyproject.toml && \
pip install --no-cache-dir .
# Copy Source Code
COPY . .
# Set the PYTHONPATH environment variable in order for it to find the Django app.
ENV PYTHONPATH=/app/src:$PYTHONPATH
# Go to the directory src/interface/web and export the built Next.js assets
# Build Web App
FROM node:20-alpine AS web-app
# Set build optimization env vars
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
WORKDIR /app/src/interface/web
RUN bash -c "yarn cache clean && yarn install --verbose && yarn ciexport"
# Install dependencies first (cache layer)
COPY src/interface/web/package.json src/interface/web/yarn.lock ./
RUN yarn install --frozen-lockfile
# Copy source and build
COPY src/interface/web/. ./
RUN yarn build
# Merge the Server and Web App into a Single Image
FROM base
ENV PYTHONPATH=/app/src
WORKDIR /app
COPY --from=server-deps /usr/local/lib/python3.10/dist-packages /usr/local/lib/python3.10/dist-packages
COPY --from=web-app /app/src/interface/web/out ./src/khoj/interface/built
COPY . .
RUN cd src && python3 khoj/manage.py collectstatic --noinput
# Run the Application
# There are more arguments required for the application to run,
# but these should be passed in through the docker-compose.yml file.
ARG PORT
# but those should be passed in through the docker-compose.yml file.
ARG PORT=42110
EXPOSE ${PORT}
ENTRYPOINT ["python3", "src/khoj/main.py"]
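A minimal sketch of exercising this multi-stage build locally, assuming an illustrative `khoj:local` tag and version `1.38.0` (neither is a project convention):

```shell
# Build the image, baking in the package version via the VERSION build arg.
docker build --build-arg VERSION=1.38.0 -t khoj:local .

# Run it, publishing the default port declared by ARG PORT=42110.
docker run --rm -p 42110:42110 khoj:local
```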


@@ -1,52 +1,62 @@
<p align="center"><img src="src/khoj/interface/web/assets/icons/khoj-logo-sideways-500.png" width="230" alt="Khoj Logo"></p>
<p align="center"><img src="https://assets.khoj.dev/khoj-logo-sideways-1200x540.png" width="230" alt="Khoj Logo"></p>
<div align="center">
[![test](https://github.com/khoj-ai/khoj/actions/workflows/test.yml/badge.svg)](https://github.com/khoj-ai/khoj/actions/workflows/test.yml)
[![dockerize](https://github.com/khoj-ai/khoj/actions/workflows/dockerize.yml/badge.svg)](https://github.com/khoj-ai/khoj/pkgs/container/khoj)
[![docker](https://github.com/khoj-ai/khoj/actions/workflows/dockerize.yml/badge.svg)](https://github.com/khoj-ai/khoj/pkgs/container/khoj)
[![pypi](https://github.com/khoj-ai/khoj/actions/workflows/pypi.yml/badge.svg)](https://pypi.org/project/khoj/)
![Discord](https://img.shields.io/discord/1112065956647284756?style=plastic&label=discord)
[![discord](https://img.shields.io/discord/1112065956647284756?style=plastic&label=discord)](https://discord.gg/BDgyabRM6e)
</div>
<div align="center">
<b>The open-source, personal AI for your digital brain</b>
<b>Your AI second brain</b>
</div>
<br />
<div align="center">
[🤖 Read Docs](https://docs.khoj.dev)
[📑 Docs](https://docs.khoj.dev)
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
[🏮 Khoj Cloud](https://khoj.dev)
[🌐 Web](https://khoj.dev)
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
[💬 Get Involved](https://discord.gg/BDgyabRM6e)
[🔥 App](https://app.khoj.dev)
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
[📚 Read Blog](https://blog.khoj.dev)
[💬 Discord](https://discord.gg/BDgyabRM6e)
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
[✍🏽 Blog](https://blog.khoj.dev)
</div>
<div align="left">
***
### 🎁 New
* Start any message with `/research` to try out the experimental research mode with Khoj.
* Anyone can now [create custom agents](https://blog.khoj.dev/posts/create-agents-on-khoj/) with tunable personality, tools and knowledge bases.
* [Read](https://blog.khoj.dev/posts/evaluate-khoj-quality/) about Khoj's excellent performance on modern retrieval and reasoning benchmarks.
***
Khoj is an application that creates always-available, personal AI agents for you to extend your capabilities.
- You can share your notes and documents to extend your digital brain.
- Your AI agents have access to the internet, allowing you to incorporate realtime information.
- Khoj is accessible on Desktop, Emacs, Obsidian, Web and Whatsapp.
- You can share pdf, markdown, org-mode, notion files and github repositories.
- You'll get fast, accurate semantic search on top of your docs.
- Your agents can create deeply personal images and understand your speech.
## Overview
[Khoj](https://khoj.dev) is a personal AI app to extend your capabilities. It smoothly scales up from an on-device personal AI to a cloud-scale enterprise AI.
- Chat with any local or online LLM (e.g. llama3, qwen, gemma, mistral, gpt, claude, gemini).
- Get answers from the internet and your docs (including image, pdf, markdown, org-mode, word, notion files).
- Access it from your Browser, Obsidian, Emacs, Desktop, Phone or Whatsapp.
- Create agents with custom knowledge, persona, chat model and tools to take on any role.
- Automate away repetitive research. Get personal newsletters and smart notifications delivered to your inbox.
- Find relevant docs quickly and easily using our advanced semantic search.
- Generate images, talk out loud, play your messages.
- Khoj is open-source, self-hostable. Always.
- Run it privately on [your computer](https://docs.khoj.dev/get-started/setup) or try it on our [cloud app](https://app.khoj.dev).
***
</div>
## See it in action
<img src="https://github.com/khoj-ai/khoj/blob/master/documentation/assets/img/using_khoj_for_studying.gif?raw=true" alt="Khoj Demo">
![demo_chat](https://github.com/khoj-ai/khoj/blob/master/documentation/assets/img/quadratic_equation_khoj_web.gif?raw=true)
Go to https://app.khoj.dev to see Khoj live.
@@ -57,6 +67,10 @@ You can see the full feature list [here](https://docs.khoj.dev/category/features
To get started with self-hosting Khoj, [read the docs](https://docs.khoj.dev/get-started/setup).
## Enterprise
Khoj is available as a cloud service, on-premises, or as a hybrid solution. To learn more about Khoj Enterprise, [visit our website](https://khoj.dev/teams).
## Contributors
Cheers to our awesome contributors! 🎉
@@ -69,10 +83,3 @@ Made with [contrib.rocks](https://contrib.rocks).
### Interested in Contributing?
We are always looking for contributors to help us build new features, improve the project documentation, or fix bugs. If you're interested, please see our [Contributing Guidelines](https://docs.khoj.dev/contributing/development) and check out our [Contributors Project Board](https://github.com/orgs/khoj-ai/projects/4).
## [Sponsors](https://github.com/sponsors/khoj-ai)
Shout out to our brilliant sponsors! 🌈
<a href="http://github.com/beekeeb">
<img src="https://raw.githubusercontent.com/beekeeb/piantor/main/docs/beekeeb.png" width=250/>
</a>


@@ -1,8 +1,6 @@
services:
database:
image: ankane/pgvector
ports:
- "5432:5432"
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
@@ -14,13 +12,26 @@ services:
interval: 30s
timeout: 10s
retries: 5
sandbox:
image: ghcr.io/khoj-ai/terrarium:latest
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 2
search:
image: docker.io/searxng/searxng:latest
volumes:
- khoj_search:/etc/searxng
environment:
- SEARXNG_BASE_URL=http://localhost:8080/
server:
depends_on:
database:
condition: service_healthy
# Use the following line to use the latest version of khoj. Otherwise, it will build from source.
# Use the following line to use the latest version of khoj. Otherwise, it will build from source. Set this to ghcr.io/khoj-ai/khoj-cloud:latest if you want to use the prod image.
image: ghcr.io/khoj-ai/khoj:latest
# Uncomment the following line to build from source. This will take a few minutes. Comment the next two lines out if you want to use the offiicial image.
# Uncomment the following line to build from source. This will take a few minutes. Comment the next two lines out if you want to use the official image.
# build:
# context: .
ports:
@@ -29,10 +40,13 @@ services:
# change the port in the args in the build section,
# as well as the port in the command section to match
- "42110:42110"
extra_hosts:
- "host.docker.internal:host-gateway"
working_dir: /app
volumes:
- khoj_config:/root/.khoj/
- khoj_models:/root/.cache/torch/sentence_transformers
- khoj_models:/root/.cache/huggingface
# Use 0.0.0.0 to explicitly set the host ip for the service on the container. https://pythonspeed.com/articles/docker-connection-refused/
environment:
- POSTGRES_DB=postgres
@@ -44,14 +58,57 @@ services:
- KHOJ_DEBUG=False
- KHOJ_ADMIN_EMAIL=username@example.com
- KHOJ_ADMIN_PASSWORD=password
# Uncomment the following lines to make your instance publicly accessible.
# Replace the domain with your domain. Proceed with caution, especially if you are using anonymous mode.
# Default URL of Terrarium, the default Python sandbox used by Khoj to run code. Its container is specified above
- KHOJ_TERRARIUM_URL=http://sandbox:8080
# Uncomment line below to have Khoj run code in remote E2B code sandbox instead of the self-hosted Terrarium sandbox above. Get your E2B API key from https://e2b.dev/.
# - E2B_API_KEY=your_e2b_api_key
# Default URL of SearxNG, the default web search engine used by Khoj. Its container is specified above
- KHOJ_SEARXNG_URL=http://search:8080
# Uncomment line below to use with Ollama running on your local machine at localhost:11434.
# Change URL to use with other OpenAI API compatible providers like VLLM, LMStudio etc.
# - OPENAI_BASE_URL=http://host.docker.internal:11434/v1/
#
# Uncomment appropriate lines below to use chat models by OpenAI, Anthropic, Google.
# Ensure you set your provider specific API keys.
# ---
# - OPENAI_API_KEY=your_openai_api_key
# - GEMINI_API_KEY=your_gemini_api_key
# - ANTHROPIC_API_KEY=your_anthropic_api_key
#
# Uncomment appropriate lines below to enable web results with Khoj
# Ensure you set your provider specific API keys.
# ---
# Free, Slower API. Does both web search and webpage read. Get API key from https://jina.ai/
# - JINA_API_KEY=your_jina_api_key
# Paid, Fast API. Only does web search. Get API key from https://serper.dev/
# - SERPER_DEV_API_KEY=your_serper_dev_api_key
# Paid, Fast, Open API. Only does webpage read. Get API key from https://firecrawl.dev/
# - FIRECRAWL_API_KEY=your_firecrawl_api_key
# Paid, Fast, Higher Read Success API. Only does webpage read. Get API key from https://olostep.com/
# - OLOSTEP_API_KEY=your_olostep_api_key
#
# Uncomment the necessary lines below to make your instance publicly accessible.
# Proceed with caution, especially if you are using anonymous mode.
# ---
# - KHOJ_NO_HTTPS=True
# Replace the KHOJ_DOMAIN with the server's externally accessible domain or I.P address from a remote machine (no http/https prefix).
# Ensure this is set correctly to avoid CSRF trusted origin or unset cookie issue when trying to access the admin panel.
# - KHOJ_DOMAIN=192.168.0.104
command: --host="0.0.0.0" --port=42110 -vv --anonymous-mode
# - KHOJ_DOMAIN=khoj.example.com
# Replace the KHOJ_ALLOWED_DOMAIN with the server's internally accessible domain or I.P address on the host machine (no http/https prefix).
# Only set if using a load balancer/reverse_proxy in front of your Khoj server. If unset, it defaults to KHOJ_DOMAIN.
# For example, if the load balancer service is added to the khoj docker network, set KHOJ_ALLOWED_DOMAIN to khoj's docker service name: `server`.
# - KHOJ_ALLOWED_DOMAIN=server
# - KHOJ_ALLOWED_DOMAIN=127.0.0.1
# Uncomment the line below to disable telemetry.
# Telemetry helps us prioritize feature development and understand how people are using Khoj
# Read more at https://docs.khoj.dev/miscellaneous/telemetry
# - KHOJ_TELEMETRY_DISABLE=True
# Comment out this line when you're using the official ghcr.io/khoj-ai/khoj-cloud:latest prod image.
command: --host="0.0.0.0" --port=42110 -vv --anonymous-mode --non-interactive
volumes:
khoj_config:
khoj_db:
khoj_models:
khoj_search:
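As a worked example of the environment toggles above, this hypothetical excerpt shows the `server` service overrides one might uncomment to chat with a local Ollama server (values are illustrative):

```yaml
# Hypothetical excerpt of the server service, with the Ollama line from
# above uncommented. host.docker.internal resolves to the host machine
# thanks to the extra_hosts entry.
services:
  server:
    environment:
      - OPENAI_BASE_URL=http://host.docker.internal:11434/v1/
```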


@@ -0,0 +1,77 @@
# Admin Panel
> Describes the Khoj settings configurable via the admin panel
By default, your admin panel is available at `http://localhost:42110/server/admin/`. Log in with your admin credentials (your `KHOJ_ADMIN_EMAIL` and `KHOJ_ADMIN_PASSWORD`). The admin panel lets you configure various settings for your Khoj server.
## App Settings
### Agents
Add all the agents you want to use for your different use-cases like Writer, Researcher, Therapist etc.
- `Personality`: This is a prompt to tell the chat model how to tune the personality of the agent.
- `Chat model`: The chat model to use for the agent.
- `Name`: The name of the agent. This field helps give the agent a unique identity across the app.
- `Avatar`: URL to the agent's profile picture. It helps give the agent a unique visual identity across the app.
- `Style color`, `Style icon`: These fields help give the agent a unique, visually identifiable identity across the app.
- `Slug`: This is the agent name to use in urls.
- `Public`: Check this if the agent is expected to be visible to all users on this Khoj server.
- `Managed by admin`: Check this if the agent is managed by admin, not by any user.
- `Creator`: The user who created the agent.
- `Tools`: The list of tools available to this agent. Tools include notes, image, online. This field is not currently configurable and only supports all tools (i.e. `["*"]`).
### Chat Model Options
Add all the chat models you want to try, use and switch between for your different use-cases. For each chat model you add:
- `Chat model`: The name of an [OpenAI](https://platform.openai.com/docs/models), [Anthropic](https://docs.anthropic.com/en/docs/about-claude/models#model-names), [Gemini](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-models) or [Offline](https://huggingface.co/models?pipeline_tag=text-generation&library=gguf) chat model.
- `Model type`: The chat model provider like `OpenAI`, `Offline`.
- `Vision enabled`: Set to `true` if your model supports vision. This is currently only supported for vision-capable OpenAI models like `gpt-4o`.
- `Max prompt size`, `Subscribed max prompt size`: These are optional fields. They are used to truncate the context to the maximum context size that can be passed to the model. This can help with accuracy and cost-saving.<br />
- `Tokenizer`: This is an optional field. It is used to accurately count tokens and truncate context passed to the chat model to stay within the models max prompt size.
![example configuration for chat model options](/img/example_chatmodel_option.png)
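The `Max prompt size` and `Tokenizer` fields work together: context is counted in tokens and trimmed to fit the model's budget. A toy sketch of the idea (whitespace splitting stands in for a real tokenizer; this is not Khoj's actual implementation):

```python
def truncate_context(context: str, max_prompt_size: int) -> str:
    """Trim context to at most max_prompt_size tokens.

    Whitespace tokenization is a stand-in for the configurable
    tokenizer; real token counts differ per model.
    """
    tokens = context.split()
    return " ".join(tokens[:max_prompt_size])

trimmed = truncate_context("one two three four five", 3)
print(trimmed)  # -> one two three
```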
### Server Chat Settings
The server chat settings are used as:
1. The default chat models for subscribed (`Advanced` field) and unsubscribed (`Default` field) users.
2. The chat model for all intermediate steps like intent detection, web search etc. during chat response generation.
If a server chat setting is not added, the first ChatModelOption in your config is used as the default chat model.
To add a server chat setting:
- Set your preferred default chat models in the `Default` fields of your [ServerChatSettings](http://localhost:42110/server/admin/database/serverchatsettings/)
- The `Advanced` field doesn't need to be set when self-hosting. When unset, the `Default` chat model is used for all users and the intermediate steps.
### AI Model API
These settings configure APIs to interact with AI models.
For each AI Model API you [add](http://localhost:42110/server/admin/database/aimodelapi/add):
- `Api key`: Set to your [OpenAI](https://platform.openai.com/api-keys), [Anthropic](https://console.anthropic.com/account/keys) or [Gemini](https://aistudio.google.com/app/apikey) API keys.
- `Name`: Give the configuration any friendly name like `OpenAI`, `Gemini`, `Anthropic`.
- `Api base url`: Set the API base URL. This is only relevant to set if you're using another OpenAI-compatible proxy server like [Ollama](/advanced/ollama) or [LMStudio](/advanced/lmstudio).
![example configuration for ai model api](/img/example_openai_processor_config.png)
### Search Model Configs
Search models generate vector embeddings of your documents for natural language search and chat. You can choose any of the [embeddings models on HuggingFace](https://huggingface.co/models?pipeline_tag=sentence-similarity) for this purpose.
<img src="/img/example_search_model_admin_settings.png" alt="Example Search Model Settings" style={{width: 500}} />
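To illustrate what these embedding models enable, here is a toy sketch of semantic search over vectors (the hand-made 3-d vectors stand in for real model embeddings; this is not Khoj's actual search code):

```python
from math import sqrt

# Documents and query represented as embedding vectors. A real search
# model would produce high-dimensional vectors from the text itself.
docs = {
    "note on gardening": [0.9, 0.1, 0.0],
    "note on cooking": [0.1, 0.9, 0.0],
}
query = [0.8, 0.2, 0.0]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# The best match is the document whose vector points the same way as the query.
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # -> note on gardening
```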
### Text to Image Model Options
Add text to image generation models with these settings. Khoj currently supports text to image models available via the OpenAI, Stability or Replicate APIs.
- `api-key`: Set to your OpenAI, Stability or Replicate API key
- `model`: Set the model name available over the selected model provider
- `model-type`: Set to the appropriate model provider
- `openai-config`: For image generation models available via an OpenAI (compatible) API, you can set the appropriate OpenAI Processor Conversation Settings instead of specifying the `api-key` field above.
### Speech to Text Model Options
Add speech to text models with these settings. Khoj currently only supports the Whisper speech to text model, via the OpenAI API or offline.
### Voice Model Options
Add text to speech models with these settings. Khoj currently supports models from [ElevenLabs](https://elevenlabs.io/).
### Reflective Questions
This is a static list of starter question suggestions for each user. It is not currently used in any client app. It used to be shown on the web app home page. We may turn it into a dynamic list of starter questions personalized to each user, say based on their recent conversations or synced knowledge base.
## User Data
- Users, Entrys, Conversations, Subscriptions, Github configs, Notion configs, User search configs, User conversation configs, User voice configs
## Miscellaneous Data
- Process Locks: Persistent Locks for Automations
- Client Applications:
Client applications allow you to set up third-party applications that can query your Khoj server using a client application ID + secret. The secret goes in a bearer token.


@@ -1,52 +0,0 @@
# Authenticate
:::info
This is only helpful for self-hosted users or teams. If you're using [Khoj Cloud](https://app.khoj.dev), both Magic Links and Google OAuth work.
:::
By default, most of the instructions for self-hosting Khoj assume a single user, and so the default configuration is to run in anonymous mode. However, if you want to enable authentication, you can do so either with with [Magic Links](#using-magic-links) or [Google OAuth](#using-google-oauth) as shown below. This can be helpful to make Khoj securely accessible to you and your team.
:::tip[Note]
Remove the `--anonymous-mode` flag in your start up command to enable authentication.
:::
## Using Magic Links
The most secure way to do this is to integrate with [Resend](https://resend.com) by setting up an account and adding an environment variable for `RESEND_API_KEY`. You can get your API key [here](https://resend.com/api-keys). This will allow you to automatically send sign-in links to users who want to log in.
It's still possible to use the magic links feature without Resend, but you'll need to manually send the magic links to users who want to log in.
## Manually sending magic links
1. The user will have to enter their email address in the login form.
They'll click `Send Magic Link`. Without the Resend API key, this will just create an unverified account for them in the backend
<img src="/img/magic_link.png" alt="Magic link login form" width="400"/>
2. You can get their magic link using the admin panel
Go to the [admin panel](http://localhost:42110/server/admin/database/khojuser/). You'll see a list of users. Search for the user you want to send a magic link to. Tick the checkbox next to their row, and use the action drop down at the top to 'Get email login URL'. This will generate a magic link that you can send to the user, which will appear at the top of the admin interface.
| Get email login URL | Retrieved login URL |
|---------------------|---------------------|
| <img src="/img/admin_get_emali_login.png" alt="Get user magic sign in link" width="400" />| <img src="/img/admin_successful_login_url.png" alt="Successfully retrieved a login URL" width="400" />|
3. Send the magic link to the user. They can click on it to log in.
Once they click on the link, they'll automatically be logged in. They'll have to repeat this process for every new device they want to log in from, but they shouldn't have to repeat it on the same device.
A given magic link can only be used once. If the user tries to use it again, they'll be redirected to the login page to get a new magic link.
## Using Google OAuth
To set up your self-hosted Khoj with Google Auth, you need to create a project in the Google Cloud Console and enable the Google Auth API.
To implement this, you'll need to:
1. You must use the `python` package or build from source, because you'll need to install additional packages for the google auth libraries (`prod`). The syntax to install the right packages is
```
pip install khoj[prod]
```
2. [Create authorization credentials](https://developers.google.com/identity/sign-in/web/sign-in) for your application.
3. Open your [Google cloud console](https://console.developers.google.com/apis/credentials) and create a configuration like below for the relevant `OAuth 2.0 Client IDs` project:
![Google auth login project settings](https://github.com/khoj-ai/khoj/assets/65192171/9bcbf6f4-197d-4d0c-973a-c10b1331c892)
4. Configure these environment variables: `GOOGLE_CLIENT_SECRET`, and `GOOGLE_CLIENT_ID`. You can find these values in the Google cloud console, in the same place where you configured the authorized origins and redirect URIs.
That's it! That should be all you have to do. Now, when you reload Khoj without `--anonymous-mode`, you should be able to use your Google account to sign in.


@@ -0,0 +1,74 @@
# Authenticate (Multi-User Setup)
```mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
```
By default, most of the instructions for self-hosting Khoj assume a single user, and so the default configuration is to run in anonymous mode. However, if you want to enable authentication, you can do so either with [Magic Links](#using-magic-links) or [Google OAuth](#using-google-oauth) as shown below. This can be helpful to make Khoj securely accessible to you and your team.
:::tip[Note]
Remove the `--anonymous-mode` flag from your khoj start up command or docker-compose file to enable authentication.
:::
## Using Magic Links
The most secure way to do this is to integrate with [Resend](https://resend.com).
1. Setup your account at https://resend.com
2. Set an environment variable for `RESEND_API_KEY`. You can get your API key [here](https://resend.com/api-keys).
3. Set an environment variable for `RESEND_EMAIL`. This is the email address that will show up in your `from` field when sending magic links.
This will allow you to automatically send sign-in links to users who want to log in.
It's still possible to use the magic links feature without Resend, but you'll need to manually send the magic links to users who want to log in.
## Manually sending magic links
1. The user will have to enter their email address in the login page at http://localhost:42110/login.
They'll click `Get Login Link`. Without the Resend API key, this will just create an unverified account for them in the backend
<img src="/img/magic_link.png" alt="Magic link login form" width="400"/>
2. You can get their magic link using the admin panel
Go to the [admin panel](http://localhost:42110/server/admin/database/khojuser/). You'll see a list of users. Search for the user you want to send a magic link to. Tick the checkbox next to their row, and use the action drop down at the top to 'Get email login URL'. This will generate a magic link that you can send to the user, which will appear at the top of the admin interface.
| Get email login URL | Retrieved login URL |
|---------------------|---------------------|
| <img src="/img/admin_get_emali_login.png" alt="Get user magic sign in link" width="400" />| <img src="/img/admin_successful_login_url.png" alt="Successfully retrieved a login URL" width="400" />|
3. Send the magic link to the user. They can click on it to log in.
Once they click on the link, they'll automatically be logged in. They'll have to repeat this process for every new device they want to log in from, but they shouldn't have to repeat it on the same device.
A given magic link can only be used once. If the user tries to use it again, they'll be redirected to the login page to get a new magic link.
## Using Google OAuth
For this method, you'll need to use the prod version of the Khoj package. You can install it as below:
<Tabs groupId="server" queryString>
<TabItem value="docker" label="Docker">
Update your `docker-compose.yml` to use the prod image
```bash
image: ghcr.io/khoj-ai/khoj-cloud:latest
```
</TabItem>
<TabItem value="pip" label="Pip">
```bash
pip install khoj[prod]
```
</TabItem>
</Tabs>
To set up your self-hosted Khoj with Google Auth, you need to create a project in the Google Cloud Console and enable the Google Auth API.
To implement this, you'll need to:
1. [Create authorization credentials](https://developers.google.com/identity/sign-in/web/sign-in) for your application.
2. Open your [Google cloud console](https://console.developers.google.com/apis/credentials) and create a configuration like below for the relevant `OAuth 2.0 Client IDs` project:
![Google auth login project settings](https://github.com/khoj-ai/khoj/assets/65192171/9bcbf6f4-197d-4d0c-973a-c10b1331c892)
3. Configure these environment variables: `GOOGLE_CLIENT_SECRET`, and `GOOGLE_CLIENT_ID`. You can find these values in the Google cloud console, in the same place where you configured the authorized origins and redirect URIs.
That's it! That should be all you have to do. Now, when you reload Khoj without `--anonymous-mode`, you should be able to use your Google account to sign in.


@@ -0,0 +1,26 @@
# Google Vertex AI
:::info
This is only helpful for self-hosted users. If you're using [Khoj Cloud](https://app.khoj.dev), you can directly use any of the pre-configured AI models.
:::
Khoj can use Google's Gemini and Anthropic's Claude family of AI models from [Vertex AI](https://cloud.google.com/vertex-ai) on Google Cloud. Explore Anthropic and Gemini AI models available on Vertex AI's [Model Garden](https://console.cloud.google.com/vertex-ai/model-garden).
## Setup
1. Follow [these instructions](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude#before_you_begin) to use models on GCP Vertex AI.
2. Create [Service Account](https://console.cloud.google.com/apis/credentials/serviceaccountkey) credentials.
- Download the credentials keyfile in json format.
- Base64 encode the credentials json keyfile. For example by running the following command from your terminal:
`base64 -i <service_account_credentials_keyfile.json>`
3. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) on your Khoj admin panel.
- **Name**: `Google Vertex` (or whatever friendly name you prefer).
- **Api Key**: `base64 encoded json keyfile` from step 2.
- **Api Base Url**: `https://{MODEL_GCP_REGION}-aiplatform.googleapis.com/v1/projects/{YOUR_GCP_PROJECT_ID}`
- MODEL_GCP_REGION: A region the AI model is available in. For example `us-east5` works for [Claude](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude#regions).
- YOUR_GCP_PROJECT_ID: Get your project id from the [Google cloud dashboard](https://console.cloud.google.com/home/dashboard)
4. Create a new [Chat Model](http://localhost:42110/server/admin/database/chatmodel/add) on your Khoj admin panel.
- **Name**: `claude-3-7-sonnet@20250219`. Any Claude or Gemini model on Vertex's Model Garden should work.
- **Model Type**: `Anthropic` or `Google`
- **Ai Model API**: *the Google Vertex Ai Model API you created in step 3*
- **Max prompt size**: `60000` (replace with the max prompt size of your model)
- **Tokenizer**: *Do not set*
5. Select the chat model on [your settings page](http://localhost:42110/settings) and start a conversation.
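The base64 encoding from step 2 can be sketched in Python as below (the sample string stands in for the contents of your downloaded service account keyfile):

```python
import base64

# Encode the keyfile contents; the result is what goes in the Api Key
# field. The sample JSON here is a placeholder for your real keyfile.
keyfile_json = '{"type": "service_account"}'
api_key = base64.b64encode(keyfile_json.encode()).decode()
print(api_key)
```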


@@ -21,17 +21,14 @@ Using LiteLLM with Khoj makes it possible to turn any LLM behind an API into you
export MISTRAL_API_KEY=<MISTRAL_API_KEY>
litellm --model mistral/mistral-tiny --drop_params
```
3. Create a new [OpenAI Processor Conversation Config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) on your Khoj admin panel
- Name: `proxy-name`
- Api Key: `any string`
- Api Base Url: **URL of your Openai Proxy API**
4. Create a new [Chat Model Option](http://localhost:42110/server/admin/database/chatmodeloptions/add) on your Khoj admin panel.
- Name: `llama3.1` (replace with the name of your local model)
- Model Type: `Openai`
- Openai Config: `<the proxy config you created in step 3>`
- Max prompt size: `20000` (replace with the max prompt size of your model)
- Tokenizer: *Do not set for OpenAI, Mistral, Llama3 based models*
5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
- Default model: `<name of chat model option you created in step 4>`
- Summarizer model: `<name of chat model option you created in step 4>`
6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
3. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) on your Khoj admin panel
- **Name**: `litellm`
- **Api Key**: `any string`
- **Api Base Url**: `<URL of your Openai Proxy API>`
4. Create a new [Chat Model](http://localhost:42110/server/admin/database/chatmodel/add) on your Khoj admin panel.
- **Name**: `llama3.1` (replace with the name of your local model)
- **Model Type**: `Openai`
- **Ai Model Api**: *the litellm Ai Model API you created in step 3*
- **Max prompt size**: `20000` (replace with the max prompt size of your model)
- **Tokenizer**: *Do not set for OpenAI, Mistral, Llama3 based models*
5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.


@@ -1,4 +1,8 @@
# LM Studio
:::warning[Unsupported]
Khoj does not work with LM Studio anymore. Khoj leverages [json mode](https://platform.openai.com/docs/guides/structured-outputs#json-mode) extensively but LMStudio's API seems to have dropped support for json mode. [1](https://x.com/lmstudio/status/1770135858709975547), [2](https://lmstudio.ai/docs/api/structured-output)
:::
:::info
This is only helpful for self-hosted users. If you're using [Khoj Cloud](https://app.khoj.dev), you're limited to our first-party models.
:::
@@ -14,17 +18,14 @@ LM Studio can expose an [OpenAI API compatible server](https://lmstudio.ai/docs/
## Setup
1. Install [LM Studio](https://lmstudio.ai/) and download your preferred Chat Model
2. Go to the Server Tab on LM Studio, Select your preferred Chat Model and Click the green Start Server button
3. Create a new [OpenAI Processor Conversation Config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) on your Khoj admin panel
- Name: `proxy-name`
- Api Key: `any string`
- Api Base Url: `http://localhost:1234/v1/` (default for LMStudio)
4. Create a new [Chat Model Option](http://localhost:42110/server/admin/database/chatmodeloptions/add) on your Khoj admin panel.
- Name: `llama3.1` (replace with the name of your local model)
- Model Type: `Openai`
- Openai Config: `<the proxy config you created in step 3>`
- Max prompt size: `20000` (replace with the max prompt size of your model)
- Tokenizer: *Do not set for OpenAI, mistral, llama3 based models*
5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
- Default model: `<name of chat model option you created in step 4>`
- Summarizer model: `<name of chat model option you created in step 4>`
6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
3. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add/) on your Khoj admin panel
- **Name**: `lmstudio`
- **Api Key**: `any string`
- **Api Base Url**: `http://localhost:1234/v1/` (default for LMStudio)
4. Create a new [Chat Model](http://localhost:42110/server/admin/database/chatmodel/add) on your Khoj admin panel.
- **Name**: `llama3.1` (replace with the name of your local model)
- **Model Type**: `Openai`
- **Ai Model Api**: *the lmstudio Ai Model Api you created in step 3*
- **Max prompt size**: `20000` (replace with the max prompt size of your model)
- **Tokenizer**: *Do not set for OpenAI, Mistral, Llama3 based models*
5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
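To confirm LM Studio's local server is up before configuring Khoj, you can hit its OpenAI-compatible models endpoint. A sketch, using the default `http://localhost:1234/v1/` base URL from step 3:

```shell
BASE_URL="http://localhost:1234/v1"   # LM Studio's default server address
# Lists the model you loaded in the Server tab; its name goes in the
# Chat Model's Name field in step 4:
# curl -s "$BASE_URL/models"
echo "${BASE_URL}/models"
```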


@@ -1,36 +0,0 @@
# Ollama
:::info
This is only helpful for self-hosted users. If you're using [Khoj Cloud](https://app.khoj.dev), you're limited to our first-party models.
:::
:::info
Khoj natively supports local LLMs [available on HuggingFace in GGUF format](https://huggingface.co/models?library=gguf). Using an OpenAI API proxy with Khoj maybe useful for ease of setup, trying new models or using commercial LLMs via API.
:::
Ollama allows you to run [many popular open-source LLMs](https://ollama.com/library) locally from your terminal.
For folks comfortable with the terminal, Ollama's terminal based flows can ease setup and management of chat models.
Ollama exposes a local [OpenAI API compatible server](https://github.com/ollama/ollama/blob/main/docs/openai.md#models). This makes it possible to use chat models from Ollama to create your personal AI agents with Khoj.
## Setup
1. Setup Ollama: https://ollama.com/
2. Start your preferred model with Ollama. For example,
```bash
ollama run llama3.1
```
3. Create a new [OpenAI Processor Conversation Config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) on your Khoj admin panel
- Name: `ollama`
- Api Key: `any string`
- Api Base Url: `http://localhost:11434/v1/` (default for Ollama)
4. Create a new [Chat Model Option](http://localhost:42110/server/admin/database/chatmodeloptions/add) on your Khoj admin panel.
- Name: `llama3.1` (replace with the name of your local model)
- Model Type: `Openai`
- Openai Config: `<the ollama config you created in step 3>`
- Max prompt size: `20000` (replace with the max prompt size of your model)
5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
- Default model: `<name of chat model option you created in step 4>`
- Summarizer model: `<name of chat model option you created in step 4>`
6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
That's it! You should now be able to chat with your Ollama model from Khoj. If you want to add additional models running on Ollama, repeat step 6 for each model.


@@ -0,0 +1,78 @@
# Ollama
```mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
```
:::info
This is only helpful for self-hosted users. If you're using [Khoj Cloud](https://app.khoj.dev), you can use our first-party supported models.
:::
:::info
Khoj can directly run local LLMs [available on HuggingFace in GGUF format](https://huggingface.co/models?library=gguf). The integration with Ollama is useful to run Khoj on Docker and have the chat models use your GPU or to try new models via CLI.
:::
Ollama allows you to run [many popular open-source LLMs](https://ollama.com/library) locally from your terminal.
For folks comfortable with the terminal, Ollama's terminal based flows can ease setup and management of chat models.
Ollama exposes a local [OpenAI API compatible server](https://github.com/ollama/ollama/blob/main/docs/openai.md#models). This makes it possible to use chat models from Ollama with Khoj.
## Setup
:::info
Restart your Khoj server after completing the first run or update steps below to ensure all settings are applied correctly.
:::
<Tabs groupId="type" queryString>
<TabItem value="first-run" label="First Run">
<Tabs groupId="server" queryString>
<TabItem value="docker" label="Docker">
1. Setup Ollama: https://ollama.com/
2. Download your preferred chat model with Ollama. For example,
```bash
ollama pull llama3.1
```
3. Uncomment `OPENAI_BASE_URL` environment variable in your downloaded Khoj [docker-compose.yml](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml#:~:text=OPENAI_BASE_URL)
4. Start Khoj docker for the first time to automatically integrate and load models from Ollama running on your host machine
```bash
# run below command in the directory where you downloaded the Khoj docker-compose.yml
docker-compose up
```
</TabItem>
<TabItem value="pip" label="Pip">
1. Setup Ollama: https://ollama.com/
2. Download your preferred chat model with Ollama. For example,
```bash
ollama pull llama3.1
```
3. Set `OPENAI_BASE_URL` environment variable to `http://localhost:11434/v1/` in your shell before starting Khoj for the first time
```bash
export OPENAI_BASE_URL="http://localhost:11434/v1/"
khoj --anonymous-mode
```
</TabItem>
</Tabs>
</TabItem>
<TabItem value="update" label="Update">
1. Setup Ollama: https://ollama.com/
2. Download your preferred chat model with Ollama. For example,
```bash
ollama pull llama3.1
```
3. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) on your Khoj admin panel
- **Name**: `ollama`
- **Api Key**: `any string`
- **Api Base Url**: `http://localhost:11434/v1/` (default for Ollama)
4. Create a new [Chat Model](http://localhost:42110/server/admin/database/chatmodel/add) on your Khoj admin panel.
- **Name**: `llama3.1` (replace with the name of your local model)
- **Model Type**: `Openai`
- **AI Model API**: *the ollama AI Model API you created in step 3*
- **Max prompt size**: `20000` (replace with the max prompt size of your model)
5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
If you want to add additional models running on Ollama, repeat step 4 for each model.
</TabItem>
</Tabs>
That's it! You should now be able to chat with your Ollama model from Khoj.
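As a quick check that the integration works, you can send a one-off chat completion to Ollama's OpenAI-compatible endpoint, the same endpoint Khoj will call. A sketch, using Ollama's default `http://localhost:11434/v1/` base URL and the example `llama3.1` model from above:

```shell
OLLAMA_BASE_URL="http://localhost:11434/v1"   # Ollama's default, as set above
MODEL="llama3.1"                              # the model you pulled with Ollama
PAYLOAD='{"model": "'"$MODEL"'", "messages": [{"role": "user", "content": "Say hi"}]}'
# One-off chat completion against the endpoint Khoj will use:
# curl -s "$OLLAMA_BASE_URL/chat/completions" -H "Content-Type: application/json" -d "$PAYLOAD"
```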


@@ -0,0 +1,20 @@
# Remote Access
By default self-hosted Khoj is only accessible on the machine it is running on. To securely access it from a remote machine:
- Set the `KHOJ_DOMAIN` environment variable to your remotely accessible IP or domain via shell or docker-compose.yml.
Examples: `KHOJ_DOMAIN=my.khoj-domain.com`, `KHOJ_DOMAIN=192.168.0.4`.
- Ensure the Khoj Admin password and `KHOJ_DJANGO_SECRET_KEY` environment variable are securely set.
- Setup [Authentication](/advanced/authentication).
- Open access to the Khoj port (default: 42110) from your OS and Network firewall.
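The checklist above can be sketched as shell setup before launching Khoj. The domain and secret below are placeholder examples; substitute your own values:

```shell
# Placeholder values; replace with your own domain/IP and a strong secret
export KHOJ_DOMAIN="my.khoj-domain.com"
export KHOJ_DJANGO_SECRET_KEY="$(head -c 32 /dev/urandom | base64)"
# export KHOJ_NO_HTTPS=True   # only if Khoj is reachable solely over a secure, private network
# khoj                        # then open port 42110 in your OS and network firewall
```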
:::warning[Use HTTPS certificate]
To expose Khoj on a custom domain over the public internet, use of an SSL certificate is strongly recommended. You can use [Let's Encrypt](https://letsencrypt.org/) to get a free SSL certificate for your domain.
To disable HTTPS, set the `KHOJ_NO_HTTPS` environment variable to `True`. This can be useful if Khoj is only accessible behind a secure, private network.
:::
:::info[Try Tailscale]
You can use [Tailscale](https://tailscale.com/) for easy, secure access to your self-hosted Khoj over the network.
1. Set `KHOJ_DOMAIN` to your machine's [tailscale IP](https://tailscale.com/kb/1452/connect-to-devices#identify-your-devices) or [fqdn on tailnet](https://tailscale.com/kb/1081/magicdns#fully-qualified-domain-names-vs-machine-names). E.g. `KHOJ_DOMAIN=100.4.2.0` or `KHOJ_DOMAIN=khoj.tailfe8c.ts.net`
2. Access Khoj by opening `http://tailscale-ip-of-server:42110` or `http://fqdn-of-server:42110` from any device on your tailscale network
:::


@@ -0,0 +1,39 @@
# Tailscale
:::info
This is only helpful for secure cross-device access to **self-hosted** Khoj. You **do not** need this if you're using [Khoj Cloud](https://app.khoj.dev).
:::
[Tailscale](https://tailscale.com) simplifies creating a private VPN using [Wireguard](https://www.wireguard.com/) and OAuth. So you can host and access services on your devices from anywhere.
The instructions below show one simple, secure way to access your self-hosted Khoj from your phone, laptop etc.
### Minimal Setup
1. Set up Khoj on your preferred machine following the [standard steps](/get-started/setup)
2. Sign up to [Tailscale](https://tailscale.com) and install the app on the machines you want to access Khoj from. This usually includes your Khoj server, phone and laptop. Note the tailscale IP of your Khoj server.
3. Start Khoj on your server by including the flag `--host <your_server_tailscale_ip>`
4. Open `http://<your_server_tailscale_ip>:42110` to access Khoj from any device on your tailscale network!
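The last two steps above, sketched with an example tailscale IP (you can find your server's real one with `tailscale ip -4` on the server):

```shell
TAILSCALE_IP="100.4.2.0"        # example; use your Khoj server's tailscale IP
# On the server:
# khoj --host "$TAILSCALE_IP"
# From any device on your tailnet, open:
echo "http://$TAILSCALE_IP:42110"
```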
### HTTPS Certificate
:::info
Tailscale uses Wireguard to encrypt and route traffic between your machines, so HTTPS isn't required for secure access. HTTPS with Tailscale is only useful to stop browsers complaining about security and blocking certain features, like clipboard access, on non-HTTPS pages.
:::
1. Enable [MagicDNS](https://tailscale.com/kb/1081/magicdns#enabling-magicdns) and [HTTPS](https://tailscale.com/kb/1153/enabling-https) toggle on your tailscale admin console [DNS](https://login.tailscale.com/admin/dns) page. Note your unique tailscale domain name (usually ends with .ts.net)
2. Create an HTTPS certificate for your Khoj server by running the following command:
```bash
# Assuming the server is named `server` and your tailnet is `black-forest.ts.net`
# Note path of the .crt and .key files generated
tailscale cert server.black-forest.ts.net
```
3. Start Khoj served via HTTPS on the standard port
```bash
sudo KHOJ_DOMAIN=server.black-forest.ts.net \
khoj \
--sslcert /path/to/your/tailscale.crt \
--sslkey path/to/your/tailscale.key \
--host=server.black-forest.ts.net \
--port 443
```
4. You should now be able to access Khoj on `https://server.black-forest.ts.net` from any device on your private tailscale network!


@@ -11,8 +11,8 @@ This is only helpful for self-hosted users. If you're using [Khoj Cloud](https:/
Khoj natively supports local LLMs [available on HuggingFace in GGUF format](https://huggingface.co/models?library=gguf). Using an OpenAI API proxy with Khoj may be useful for ease of setup, trying new models or using commercial LLMs via API.
:::
Khoj can use any OpenAI API compatible server including [Ollama](/advanced/ollama), [LMStudio](/advanced/lmstudio) and [LiteLLM](/advanced/litellm).
Configuring this allows you to use non-standard, open or commercial, local or hosted LLM models for Khoj
Khoj can use any OpenAI API compatible server including local providers like [Ollama](/advanced/ollama), [LMStudio](/advanced/lmstudio) and [LiteLLM](/advanced/litellm) and commercial providers like [HuggingFace](https://huggingface.co/docs/api-inference/tasks/chat-completion#using-the-api), [OpenRouter](https://openrouter.ai/docs/quick-start) etc.
Configuring this allows you to use non-standard, open or commercial, local or hosted LLM models for Khoj.
Combining them with Khoj can turn your favorite LLM into an AI agent, allowing you to chat with your docs, find answers from the internet, build custom agents and run automations.
@@ -20,18 +20,15 @@ For specific integrations, see our [Ollama](/advanced/ollama), [LMStudio](/advan
## General Setup
1. Start your preferred OpenAI API compatible app
3. Create a new [OpenAI Processor Conversation Config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) on your Khoj admin panel
- Name: `proxy-name`
- Api Key: `any string`
- Api Base Url: **URL of your Openai Proxy API**
4. Create a new [Chat Model Option](http://localhost:42110/server/admin/database/chatmodeloptions/add) on your Khoj admin panel.
- Name: `llama3` (replace with the name of your local model)
- Model Type: `Openai`
- Openai Config: `<the proxy config you created in step 3>`
- Max prompt size: `2000` (replace with the max prompt size of your model)
- Tokenizer: *Do not set for OpenAI, mistral, llama3 based models*
5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
- Default model: `<name of chat model option you created in step 4>`
- Summarizer model: `<name of chat model option you created in step 4>`
6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
1. Start your preferred OpenAI API compatible app locally or get API keys from commercial AI model providers.
2. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) on your Khoj admin panel
- **Name**: `any name`
- **Api Key**: `any string`
- **Api Base Url**: *The URL of your Openai Compatible API*
3. Create a new [Chat Model](http://localhost:42110/server/admin/database/chatmodel/add) on your Khoj admin panel.
- **Name**: `llama3` (replace with the name of your local model)
- **Model Type**: `Openai`
- **Ai Model Api**: *The AI Model API you created in step 2*
- **Max prompt size**: `2000` (replace with the max prompt size of your model)
- **Tokenizer**: *Do not set for OpenAI, Mistral, Llama3 based models*
4. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
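To verify your OpenAI-compatible server works before (or after) configuring Khoj, you can send it a one-off chat completion. A sketch; the base URL and model name below are examples, so substitute the values you used in the steps above:

```shell
BASE_URL="http://localhost:8000/v1"   # example; your Api Base Url from the AI Model API
MODEL="llama3"                        # example; the Chat Model Name you configured
PAYLOAD='{"model": "'"$MODEL"'", "messages": [{"role": "user", "content": "Hello"}]}'
# curl -s "$BASE_URL/chat/completions" \
#   -H "Content-Type: application/json" \
#   -H "Authorization: Bearer any-string" \
#   -d "$PAYLOAD"
```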


@@ -4,31 +4,41 @@ sidebar_position: 1
# Desktop
> Query your Second Brain from your machine
> Upload your knowledge base to Khoj and chat with your whole corpus
Use the Desktop app to chat and search with Khoj.
You can also share your files, folders with Khoj using the app.
## Companion App
Share your files, folders with Khoj using the app.
Khoj will keep these files in sync to provide contextual responses when you search or chat.
## Features
- **Chat**
- **Faster answers**: Find answers quickly, from your private notes or the public internet
- **Assisted creativity**: Smoothly weave across retrieving answers and generating content
- **Iterative discovery**: Iteratively explore and re-discover your notes
- **Quick access**: Use [Khoj Mini](/features/khoj_mini) on the desktop to quickly pull up a mini chat module for faster answers
- **Search**
- **Natural**: Advanced natural language understanding using Transformer based ML Models
- **Incremental**: Incremental search for a fast, search-as-you-type experience
## Setup
:::info[Self Hosting]
If you are self-hosting the Khoj server, update the *Settings* page on the Khoj Desktop app to:
- Set the `Khoj URL` field to your Khoj server URL. By default, use `http://127.0.0.1:42110`.
- Do not set the `Khoj API Key` field if your Khoj server runs in anonymous mode. For example, `khoj --anonymous-mode`
:::
1. Install the [Khoj Desktop app](https://khoj.dev/downloads) for your OS
2. Generate an API key on the [Khoj Web App](https://app.khoj.dev/settings#clients)
3. Set your Khoj API Key on the *Settings* page of the Khoj Desktop app
4. [Optional] Add any files, folders you'd like Khoj to be aware of on the *Settings* page and Click *Save*
4. [Optional] Add any files, folders you'd like Khoj to be aware of on the *Settings* page and Click *Save*.
These files and folders will be automatically kept in sync for you
## Interface
| Chat | Search |
|:----:|:------:|
| ![](/img/khoj_chat_on_desktop.png) | ![](/img/khoj_search_on_desktop.png) |
# Main App
You can also install the Khoj application on your desktop as a progressive web app.
1. Open the [Khoj Web App](https://app.khoj.dev) in Chrome.
2. Click on the install button in the address bar to install the app. You must be logged into your Chrome browser for this to work.
![progressive web app install icon](/img/pwa_install_desktop.png)
Alternatively, you can also install using this route:
1. Open the three-dot menu in the top right corner of the browser.
2. Go to the 'Cast, Save, and Share' option.
3. Click on the "Open in Khoj" option.
![progressive web app install route](/img/chrome_pwa_alt.png)


@@ -30,6 +30,12 @@ sidebar_position: 2
| ![khoj search on emacs](/img/khoj_search_on_emacs.png) | ![khoj chat on emacs](/img/khoj_chat_on_emacs.png) |
## Setup
:::info[Self Hosting]
If you are self-hosting the Khoj server, modify the install steps below:
- Set `khoj-server-url` to your Khoj server URL. By default, use `http://127.0.0.1:42110`.
- Do not set `khoj-api-key` if your Khoj server runs in anonymous mode. For example, `khoj --anonymous-mode`
:::
1. Generate an API key on the [Khoj Web App](https://app.khoj.dev/settings#clients)
2. Add below snippet to your Emacs config file, usually at `~/.emacs.d/init.el`
@@ -43,6 +49,7 @@ M-x package-install khoj
; Set your Khoj API key
(setq khoj-api-key "YOUR_KHOJ_CLOUD_API_KEY")
(setq khoj-server-url "https://app.khoj.dev")
```
#### **Minimal Install**
@@ -54,7 +61,8 @@ M-x package-install khoj
:ensure t
:pin melpa-stable
:bind ("C-c s" . 'khoj)
:config (setq khoj-api-key "YOUR_KHOJ_CLOUD_API_KEY"))
:config (setq khoj-api-key "YOUR_KHOJ_CLOUD_API_KEY"
khoj-server-url "https://app.khoj.dev"))
```
#### **Standard Install**
@@ -67,8 +75,9 @@ M-x package-install khoj
:pin melpa-stable
:bind ("C-c s" . 'khoj)
:config (setq khoj-api-key "YOUR_KHOJ_CLOUD_API_KEY"
khoj-org-directories '("~/docs/org-roam" "~/docs/notes")
khoj-org-files '("~/docs/todo.org" "~/docs/work.org")))
khoj-server-url "https://app.khoj.dev"
khoj-index-directories '("~/docs/org-roam" "~/docs/notes")
khoj-index-files '("~/docs/todo.org" "~/docs/work.org")))
```
#### **Straight.el**
@@ -81,6 +90,7 @@ M-x package-install khoj
:straight (khoj :type git :host github :repo "khoj-ai/khoj" :files (:defaults "src/interface/emacs/khoj.el"))
:bind ("C-c s" . 'khoj)
:config (setq khoj-api-key "YOUR_KHOJ_CLOUD_API_KEY"
khoj-server-url "https://app.khoj.dev"
khoj-org-directories '("~/docs/org-roam" "~/docs/notes")
khoj-org-files '("~/docs/todo.org" "~/docs/work.org")))
```
@@ -104,7 +114,7 @@ This feature finds entries similar to the one you are currently on.
2. Hit `C-c s f` (or `M-x khoj RET f`) to find similar entries
### Advanced Usage
- Add [query filters](https://github.com/khoj-ai/khoj/#query-filters) during search to narrow down results further
- Add [query filters](/miscellaneous/query-filters) during search to narrow down results further
e.g. `What is the meaning of life? -"god" +"none" dt>"last week"`
- Use `C-c C-o 2` to open the current result at cursor in its source org file


@@ -20,11 +20,18 @@ sidebar_position: 3
- **Discover**: Find similar notes to the current one
## Setup
:::info[Self Hosting]
If you are self-hosting the Khoj server, update the Khoj Obsidian plugin settings step below:
- Set the `Khoj URL` field to your Khoj server URL. By default, use `http://127.0.0.1:42110`.
- Do not set the `Khoj API Key` field if your Khoj server runs in anonymous mode. For example, `khoj --anonymous-mode`
:::
1. Open [Khoj](https://obsidian.md/plugins?id=khoj) from the *Community plugins* tab in Obsidian settings panel
2. Click *Install*, then *Enable* on the Khoj plugin page in Obsidian
3. Generate an API key on the [Khoj Web App](https://app.khoj.dev/settings#clients)
4. Set your Khoj API Key in the Khoj plugin settings in Obsidian
1. Open [Khoj](https://obsidian.md/plugins?id=khoj) from the *Community plugins* tab in Obsidian settings panel
2. Click *Install*, then *Enable* on the Khoj plugin page in Obsidian
3. Generate an API key on the [Khoj Web App](https://app.khoj.dev/settings#clients)
4. Set your Khoj API Key in the Khoj plugin settings on Obsidian
5. (Optional) Click `Force Sync` in the Khoj plugin settings on Obsidian to immediately sync your Obsidian vault.
<br />By default, your Obsidian vault is automatically synced periodically.
See the official [Obsidian Plugin Docs](https://help.obsidian.md/Extending+Obsidian/Community+plugins) for more details on installing Obsidian plugins.
@@ -41,7 +48,7 @@ To see other notes similar to the current one, run *Khoj: Find Similar Notes* fr
### Search
Run *Khoj: Search* from the [Command Palette](https://help.obsidian.md/Plugins/Command+palette)
See [Khoj Search](/features/search) for more details. Use [query filters](/miscellaneous/advanced#query-filters) to limit entries to search
See [Khoj Search](/features/search) for more details. Use [query filters](/miscellaneous/query-filters) to limit entries to search
[search_demo](https://user-images.githubusercontent.com/6413477/218801155-cd67e8b4-a770-404a-8179-d6b61caa0f93.mp4 ':include :type=mp4')


@@ -8,6 +8,10 @@ sidebar_position: 4
Without any desktop clients, you can start chatting with Khoj on the web. Bear in mind you do need one of the desktop clients in order to share and sync your data with Khoj.
Go see it here --> [Khoj Cloud](https://app.khoj.dev).
![](/img/khoj_web_app_home.png)
## Features
- **Chat**
- **Faster answers**: Find answers quickly, from your private notes or the public internet
@@ -25,7 +29,7 @@ You can upload documents to Khoj from the web interface, one at a time. This is
1. You can drag and drop the document into the chat window.
2. Or click the paperclip icon in the chat window and select the document from your file system.
![demo of dragging and dropping a file](https://assets.khoj.dev/drag_drop_file.gif)
![demo of dragging and dropping a file](https://assets.khoj.dev/home_page_data_upload.gif)
### Install on Phone
You can optionally install Khoj as a [Progressive Web App (PWA)](https://web.dev/learn/pwa/installation). This makes it quick and easy to access Khoj on your phone.
@@ -37,9 +41,3 @@ You can optionally install Khoj as a [Progressive Web App (PWA)](https://web.dev
| Step 1 | Step 2 | Step 3|
|:---:|:---:|:---:|
| ![](/img/pwa_install_1.png) | ![](/img/pwa_install_2.png) | ![](/img/pwa_install_3.png) |
## Interface
| Search | Chat |
|:------:|:----:|
| ![](/img/khoj_search_on_web.png) | ![](/img/khoj_chat_on_web.png) |


@@ -6,13 +6,13 @@ sidebar_position: 5
> Query your Second Brain from WhatsApp
Text [+1 (848) 800 4242](https://wa.me/18488004242) or scan the QQ code below on your phone to chat with Khoj on WhatsApp.
Text [+1 (848) 800 4242](https://wa.me/18488004242) or scan the QR code below on your phone to chat with Khoj on WhatsApp.
Without any desktop clients, you can start chatting with Khoj on WhatsApp. Bear in mind you do need one of the desktop clients in order to share and sync your data with Khoj. The WhatsApp AI bot will work right away for answering generic queries and using Khoj in default mode.
In order to use Khoj on WhatsApp with your own data, you need to setup a Khoj Cloud account and connect your WhatsApp account to it. This is a one time setup and you can do it from the [Khoj Cloud config page](https://app.khoj.dev/settings).
If you hit usage limits for the WhatsApp bot, upgrade to [a paid plan](https://khoj.dev/pricing) on Khoj Cloud.
If you hit usage limits for the WhatsApp bot, upgrade to [a paid plan](https://khoj.dev/#pricing) on Khoj Cloud.
<img src="https://khoj-web-bucket.s3.amazonaws.com/khojwhatsapp.png" alt="WhatsApp QR Code" width="300" height="300" />
@@ -27,4 +27,4 @@ We have more commands under development, including `/share` to uploading documen
## Source Code
You can find all of the code for the WhatsApp bot in the the [flint repository](https://github.com/khoj-ai/flint). As all of our code, it is open source and you can contribute to it.
You can find all of the code for the WhatsApp bot in the [flint repository](https://github.com/khoj-ai/flint). As all of our code, it is open source and you can contribute to it.


@@ -102,7 +102,19 @@ sudo -u postgres createdb khoj --password
</TabItem>
</Tabs>
#### 3. Run
#### 3. Build the front-end assets
```shell
cd src/interface/web/
yarn install
yarn export
```
You can optionally use `yarn dev` to start a development server for the front-end which will be available at http://localhost:3000. This is especially useful if you're making changes to the front-end code, but not necessary for running Khoj. Note that streaming does not work on the dev server due to how it is handled with SSR in Next.js.
Always run `yarn export` to test your front-end changes on http://localhost:42110 before creating a PR.
#### 4. Run
1. Start Khoj
```bash
khoj -vv
@@ -224,7 +236,7 @@ The core code for the Obsidian plugin is under `src/interface/obsidian`. The fil
4. Open the `khoj` folder in the file explorer that opens. You'll see a file called `main.js` in this folder. To test your changes, replace this file with the `main.js` file that was generated by the development server in the previous section.
## Create Khoj Release (Only for Maintainers)
Follow the steps below to [release](https://github.com/debanjum/khoj/releases/) Khoj. This will create a stable release of Khoj on [Pypi](https://pypi.org/project/khoj/), [Melpa](https://stable.melpa.org/#%252Fkhoj) and [Obsidian](https://obsidian.md/plugins?id%253Dkhoj). It will also create desktop apps of Khoj and attach them to the latest release.
Follow the steps below to [release](https://github.com/khoj-ai/khoj/releases/) Khoj. This will create a stable release of Khoj on [Pypi](https://pypi.org/project/khoj/), [Melpa](https://stable.melpa.org/#%252Fkhoj) and [Obsidian](https://obsidian.md/plugins?id%253Dkhoj). It will also create desktop apps of Khoj and attach them to the latest release.
1. Create and tag release commit by running the bump_version script. The release commit sets version number in required metadata files.
```shell
@@ -243,7 +255,7 @@ Follow the steps below to [release](https://github.com/debanjum/khoj/releases/)
## Visualize Codebase
*[Interactive Visualization](https://mango-dune-07a8b7110.1.azurestaticapps.net/?repo=debanjum%2Fkhoj)*
*[Interactive Visualization](https://mango-dune-07a8b7110.1.azurestaticapps.net/?repo=khoj-ai%2Fkhoj)*
![](/img/khoj_codebase_visualization_0.2.1.png)


@@ -1,6 +1,10 @@
# Github integration
The Github integration allows you to index as many repositories as you want. It's currently default configured to index Issues, Commits, and all Markdown/Org files in each repository. For large repositories, this takes a fairly long time, but it works well for smaller projects.
:::warning[Unmaintained]
The Github integration is not maintained. We are considering deprecating it. It doesn't seem to be used by many folks and it's cumbersome for us to maintain.
:::
The Github integration allows you to index as many repositories as you want. It's currently default configured to index all Markdown/Org/Text files in each repository. For large repositories, this takes a fairly long time, but it works well for smaller projects.
# Configure your settings
@@ -9,6 +13,6 @@ The Github integration allows you to index as many repositories as you want. It'
## Use the Github plugin
1. Generate a [classic PAT (personal access token)](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) from [Github](https://github.com/settings/tokens) with `repo` and `admin:org` scopes at least.
2. Navigate to [https://app.khoj.dev/settings#github](https://app.khoj.dev/settings#github) to configure your Github settings. Enter in your PAT, along with details for each repository you want to index.
2. Navigate to [https://app.khoj.dev/settings/content/github](https://app.khoj.dev/settings/content/github) to configure your Github settings. Enter your PAT, along with details for each repository you want to index.
3. Click `Save`. Go back to the settings page and click `Configure`.
4. Go to [https://app.khoj.dev/](https://app.khoj.dev/) and start searching!
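You can verify your PAT works before saving it in Khoj by calling GitHub's API directly. A sketch with a placeholder token; a valid token returns your user profile with HTTP 200:

```shell
GITHUB_PAT="ghp_your_token_here"   # placeholder; paste your real classic PAT
AUTH_HEADER="Authorization: token $GITHUB_PAT"
# Returns your user profile when the token is valid:
# curl -s -H "$AUTH_HEADER" https://api.github.com/user
echo "$AUTH_HEADER"
```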


@@ -16,4 +16,4 @@ Go to https://app.khoj.dev/settings to connect your Notion workspace(s) to Khoj.
4. In the first step, you generated an API key. Use the newly generated API Key in your Khoj settings, by default at [http://localhost:42110/settings#notion](http://localhost:42110/settings#notion). Click `Save`.
5. Click `Configure` in http://localhost:42110/settings to index your Notion workspace(s).
That's it! You should be ready to start searching and chatting. Make sure you've configured your [chat settings](/get-started/setup#2-configure).
That's it! You should be ready to start searching and chatting. Make sure you've configured your [chat settings](/get-started/setup#use-khoj).


@@ -9,8 +9,9 @@ keywords: ["upload data", "upload files", "share data", "share files", "pdf ai",
There are several ways you can get started with sharing your data with the Khoj AI.
- Drag and drop your documents via [the web UI](/clients/web/#upload-documents). This is best if you have a one-off document you need to interact with.
- You can use the [search page](https://app.khoj.dev/search) to upload documents that become indexed & searchable by your second brain. Click on "Add Documents" to get started.
- Use the desktop app to [upload and sync your documents](/clients/desktop). This is best if you have a lot of documents on your computer or you need the docs to stay in sync.
- Setup the sync options for either [Obsidian](/clients/obsidian) or [Emacs](/clients/emacs) to automatically sync your documents with Khoj. This is best if you are already using these tools and want to leverage Khoj's AI capabilities.
- Configure your [Notion](/data-sources/notion_integration) or [Github](/data-sources/github_integration) to sync with Khoj. By providing your credentials, you can keep the data synced in the background.
![demo of dragging and dropping a file](https://assets.khoj.dev/drag_drop_file.gif)
![demo of dragging and dropping a file](https://assets.khoj.dev/upload_pdf_doc.gif)


@@ -6,7 +6,7 @@ sidebar_position: 4
You can use agents to setup custom system prompts with Khoj. The server host can setup their own agents, which are accessible to all users. You can see ours at https://app.khoj.dev/agents.
![Demo](https://assets.khoj.dev/agents_demo.gif)
![Demo](/img/agents_page_full.png)
## Creating an Agent (Self-Hosted)


@@ -29,7 +29,7 @@ Khoj is available as a [Desktop app](/clients/desktop), [Emacs package](/clients
![](/img/khoj_clients.svg ':size=400px')
### Supported Data Sources
Khoj can understand your word, PDF, org-mode, markdown, plaintext files, [Github projects](/data-sources/github_integration) and [Notion pages](/data-sources/notion_integration).
Khoj can understand your word, PDF, org-mode, markdown, plaintext files, and [Notion pages](/data-sources/notion_integration).
![](/img/khoj_datasources.svg ':size=200px')

# Automations
[Automations](https://app.khoj.dev/automations) are a powerful feature within Khoj to schedule repeated jobs for Khoj to do on your behalf. Automations allow you to run your query automatically at a scheduled time and frequency. Khoj will send its research to you as an email.
You can ask Khoj to create custom newsletters, summarize news, notify you of world events, etc.
Khoj uses your local time zone for scheduling. You can configure, share and delete your automations from the [automations page](https://app.khoj.dev/automations).
:::info[Self Hosting]
Self-hosting users need to set up Resend and [Authentication](/advanced/authentication) to have Khoj send automation emails.
:::
| [Create Automation](https://app.khoj.dev/automations?subject=Weekly%20Newsletter&query=Compile%20a%20message%20including:%201.%20A%20recap%20of%20news%20from%20last%20week%202.%20An%20at-home%20workout%20I%20can%20do%20before%20work%203.%20A%20quote%20to%20inspire%20me%20for%20the%20week%20ahead&crontime=00%209%20*%20*%201) | Receive Automation Email |
|--------|-------|
| <img src="/img/khoj_create_automation.png" alt="create automation" width="330" height="500" /> | <img src="/img/khoj_automation_email.png" alt="automation email" width="330" height="660" /> |
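The `crontime` parameter baked into the create-automation link above is a standard five-field cron schedule. As a quick sketch of how to read the `00 9 * * 1` example (the field meanings below are standard cron, not Khoj-specific):

```shell
# Five cron fields: minute, hour, day-of-month, month, day-of-week.
# "00 9 * * 1" means minute 00, hour 9, any day of month, any month,
# day-of-week 1 (Monday) -> every Monday at 9:00 AM, in your local time zone.
crontime="00 9 * * 1"
echo "$crontime"
```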

You can configure Khoj to chat with you about anything. When relevant, it'll use any notes or documents you shared with it to respond. It acts as an excellent research assistant, search engine, or personal tutor.
<img src="https://assets.khoj.dev/vision_chat_example.png" alt="Chat on Web" style={{width: '400px'}}/>
### Overview
- Creates a personal assistant for you to inquire about and engage with your notes or online information as needed
- Shows reference notes used to generate a response
### Setup (Self-Hosting)
See [the setup guide](/get-started/setup.mdx) to configure your chat models.
### Use
1. Open Khoj Chat
- **On Web**: Open [/chat](https://app.khoj.dev/chat) in your web browser
- **On Obsidian**: Search for *Khoj: Chat* in the [Command Palette](https://help.obsidian.md/Plugins/Command+palette)
- **On Emacs**: Run `M-x khoj <user-query>`
2. Enter your queries to chat with Khoj. Use [slash commands](#commands) and [query filters](/miscellaneous/query-filters) to change what Khoj uses to respond
#### Details
1. Your query is used to retrieve the most relevant notes, if any, using Khoj search (RAG)
2. These notes, the last few messages and associated metadata are passed to the enabled chat model along with your query to generate a response
#### Conversation File Filters
You can use conversation file filters to limit the notes used in the chat response. To do so, use the left panel in the web UI. Alternatively, you can also use [query filters](/miscellaneous/query-filters) to limit the notes used in the chat response.
<img src="/img/file_filters_conversation.png" alt="Conversation File Filter" style={{width: '400px'}}/>
#### Commands
Slash commands allow you to change what Khoj uses to respond to your query
- **/image**: Generate an image in response to your query.
- **/help**: Use /help to get all available commands and general information about Khoj
- **/summarize**: Summarize the file selected via the conversation's file filter. Refer to [File Summarization](summarization) for details.
- **/diagram**: Generate a diagram in response to your query. This is built on [Excalidraw](https://excalidraw.com/).
- **/code**: Generate and run very simple Python code snippets. Refer to [Code Execution](code_execution) for details.
- **/research**: Go deeper in a topic for more accurate, in-depth responses.

# Code Execution
Khoj can generate and run simple Python code as well. This is useful if you want Khoj to do some data analysis or generate plots and reports. LLMs aren't skilled at complex quantitative tasks by default, so code generation & execution can come in handy for such tasks.
Khoj automatically infers when to use the code tool. You can also tell it explicitly to use the code tool or use the `/code` [slash command](https://docs.khoj.dev/features/chat/#commands) in your chat.
## Setup (Self-Hosting)
### Terrarium Sandbox
Use [Cohere's Terrarium](https://github.com/cohere-ai/cohere-terrarium) to host the code sandbox locally on your machine for free.
To run with Docker, use our [docker-compose.yml](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml) to automatically setup the Terrarium code sandbox, or start it manually like this:
```bash
docker pull ghcr.io/khoj-ai/terrarium:latest
docker run -d -p 8080:8080 ghcr.io/khoj-ai/terrarium:latest
```
To run from source, check [these instructions](https://github.com/khoj-ai/cohere-terrarium?tab=readme-ov-file#development).
#### Verify
Verify that it's running by evaluating a simple Python expression:
```bash
curl -X POST -H "Content-Type: application/json" \
--url http://localhost:8080 \
--data-raw '{"code": "1 + 1"}' \
--no-buffer
```
### E2B Sandbox
[E2B](https://e2b.dev/) allows Khoj to run code on a remote but versatile sandbox with support for more python libraries. This is [not free](https://e2b.dev/pricing).
To have Khoj use E2B as the code sandbox:
1. Generate an API key on [their dashboard](https://e2b.dev/dashboard).
2. Set the `E2B_API_KEY` environment variable to it on the machine running your Khoj server.
- When using our [docker-compose.yml](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml), uncomment and set the `E2B_API_KEY` env var in the `docker-compose.yml` file.
3. Now restart your Khoj server to switch to using the E2B code sandbox.
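For step 2 on a bare-metal (non docker-compose) setup, that could look like the following sketch; the key value is a placeholder, not a real credential:

```shell
# Placeholder value: paste the real key from https://e2b.dev/dashboard
export E2B_API_KEY="e2b_your_key_here"
# Restart the Khoj server afterwards so it picks up the variable and
# switches from the local Terrarium sandbox to the E2B sandbox.
```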

To generate images, you just need to provide a prompt to Khoj in which the image generation intent is clear.
## Setup (Self-Hosting)
You have a couple of image generation options.
### Image Generation Models
We support most state-of-the-art image generation models, including Ideogram, Flux, and Stable Diffusion. These run via [Replicate](https://replicate.com). Here's how to set them up:
1. Get a Replicate API key [here](https://replicate.com/account/api-tokens).
2. Create a new [Text to Image Model](http://localhost:42110/server/admin/database/texttoimagemodelconfig/). Set the `type` to `Replicate`. Use any of the model names you see [on this list](https://replicate.com/pricing#image-models). We recommend the `model name` `black-forest-labs/flux-1.1-pro` from [Replicate](https://replicate.com/black-forest-labs/flux-1.1-pro).
### OpenAI
1. Get [an OpenAI API key](https://platform.openai.com/settings/organization/api-keys).
2. Setup your OpenAI API key, if you haven't already. See instructions [here](/get-started/setup#add-chat-models)
3. Create a text to image config at http://localhost:42110/server/admin/database/texttoimagemodelconfig/. Use `model name` `dall-e-3` to use OpenAI for image generation. Make sure to set the `Ai model api` field to the OpenAI AI model API you set up in step 2.

Oftentimes, having to leave your keyboard to move the mouse can break your flow. We want to make it as easy as possible for Khoj's AI to flow with you, so we've added some keyboard shortcuts to facilitate that.
## Obsidian
### Up/Down Arrow Keys (Chat Input)

Try it out yourself! https://app.khoj.dev
## Self-Hosting
### Search
Online search can work even when self-hosting. You have a few options:
- If you're using Docker, online search should work out of the box with [searxng](https://github.com/searxng/searxng) using our standard `docker-compose.yml`.
- For a non-local, free solution, you can use [JinaAI's reader API](https://jina.ai/reader/) to search online and read webpages. You can get a free API key via https://jina.ai/reader. Set the `JINA_API_KEY` environment variable to your Jina AI reader API key to enable online search.
- For production-grade, fast online search, set the `SERPER_DEV_API_KEY` environment variable to your [Serper.dev](https://serper.dev/) API key. These search results include additional context like the answer box, knowledge graph etc.
### Webpage Reading
Out of the box, you **don't have to do anything to enable webpage reading**. Khoj will automatically read webpages by using the `requests` library. To get more distributed and scalable webpage reading, you can use the following options:
- If you're using Jina AI's reader API for search, it should work automatically for webpage reading as well.
- For scalable webpage scraping, you can use [Firecrawl](https://www.firecrawl.dev/). Create a new [Webscraper](http://localhost:42110/server/admin/database/webscraper/add/). Set your Firecrawl API key to the Api Key field, and set the type to Firecrawl.
- For advanced webpage reading, you can use [Olostep](https://www.olostep.com/). This has a higher success rate at reading webpages than the default webpage readers. Create a new [Webscraper](http://localhost:42110/server/admin/database/webscraper/add/). Set your Olostep API key to the Api Key field, and set the type to Olostep.
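For the environment-variable based options above, a minimal sketch on the machine running your Khoj server (the key values are placeholders):

```shell
# Placeholder values: substitute keys from https://jina.ai/reader and https://serper.dev
export JINA_API_KEY="jina_your_key_here"         # Jina search + webpage reading
export SERPER_DEV_API_KEY="serper_your_key_here" # Serper.dev search results
```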

Take advantage of super fast search to find relevant notes and documents from your second brain.
- **On Web**: Open https://app.khoj.dev/ in your web browser
- **On Obsidian**: Click the *Khoj search* icon 🔎 on the [Ribbon](https://help.obsidian.md/User+interface/Workspace/Ribbon) or Search for *Khoj: Search* in the [Command Palette](https://help.obsidian.md/Plugins/Command+palette)
- **On Emacs**: Run `M-x khoj <user-query>`
2. Query using natural language to find relevant entries from your knowledge base. Use [query filters](/miscellaneous/query-filters) to limit entries to search
### Demo
![](/img/search_agents_markdown.png ':size=400px')
### Implementation Overview
A bi-encoder model is used to create meaning vectors (aka vector embeddings) of your documents and search queries.
1. When you sync your documents with Khoj, it uses the bi-encoder model to create and store meaning vectors of (chunks of) your documents
2. When you initiate a natural language search, the bi-encoder model converts your query into a meaning vector and finds the most relevant document chunks for that query by comparing their meaning vectors.
3. The slower but higher-quality cross-encoder model is then used to re-rank these documents for your given query.
### Setup (Self-Hosting)
You are **not required** to configure the search model config when self-hosting. Khoj sets up a decent default local search model config for general use.
You may want to configure this if you need better multi-lingual search, want to experiment with different, newer models or the default models do not work for your use-case.
You can use bi-encoder models downloaded locally [from Huggingface](https://huggingface.co/models?library=sentence-transformers), served via the [HuggingFace Inference API](https://endpoints.huggingface.co/), OpenAI API, Azure OpenAI API or any OpenAI Compatible API like Ollama, LiteLLM etc. Follow the steps below to configure your search model:
1. Open the [SearchModelConfig](http://localhost:42110/server/admin/database/searchmodelconfig/) page on your Khoj admin panel.
2. Hit the Plus button to add a new model config or click the id of an existing model config to edit it.
3. Set the `biencoder` field to the name of the bi-encoder model supported [locally](https://huggingface.co/models?library=sentence-transformers) or via the API you configure.
4. Set the `Embeddings inference endpoint api key` to your OpenAI API key and `Embeddings inference endpoint type` to `OpenAI` to use an OpenAI embedding model.
5. Also set the `Embeddings inference endpoint` to your Azure OpenAI or OpenAI compatible API URL to use the model via those APIs.
6. Ensure the search model config you want to use is the **only one** with its `name` field set to `default`[^1].
7. Save the search model configs and restart your Khoj server to start using your new, updated search config.
:::info
You will need to re-index all your documents if you want to use a different bi-encoder model.
:::
:::info
You may need to tune the `Bi encoder confidence threshold` field for each bi-encoder to get an appropriate number of documents for chat with your knowledge base.
Confidence here is a normalized measure of semantic distance between your query and documents. The confidence threshold limits the documents returned to chat that fall within the distance specified in this field. It can take values between 0.0 (exact overlap) and 1.0 (no meaning overlap).
:::
[^1]: Khoj uses the first search model config named `default` it finds on startup as the search model config for that session

You can share any of your conversations by going to the three dot menu on the conversation.
This means you can easily share a conversation with someone to show them how you solved a problem, or to get help with something you're working on.
![demo of sharing a conversation](https://assets.khoj.dev/share_side_panel_conversation.gif)

You can talk to Khoj using your voice. Khoj will respond to your queries using the same models as the chat feature. You can use voice chat on the web, Desktop, and Obsidian apps.
![Voice Chat](https://assets.khoj.dev/speech_to_text_demo.gif)
Click on the little mic icon to send your voice message to Khoj. It will send back what it heard via text. You can edit the message before sending it, if required. Try it at https://app.khoj.dev/.
## Voice Response
If you send a voice message, Khoj will automatically respond back with a voice message.
You can also click on the speaker icon next to any message to hear it out loud. The voice response feature is available only on the web view right now.
![Speaker Icon](/img/text_to_speech.png)
## Setup (Self-Hosting)
Voice chat will automatically be configured when you initialize the application. The default configuration will run locally. If you want to use the OpenAI whisper API for voice chat, you can set it up by following these steps:
1. Setup your OpenAI API key. See instructions [here](/get-started/setup#add-chat-models).
2. Create a new configuration at http://localhost:42110/server/admin/database/speechtotextmodeloptions/. We recommend the value `whisper-1` and model type `Openai`.
If you want to use the Text to Speech feature, you can set it up by following these steps:
1. Setup your account on [ElevenLabs.io](https://elevenlabs.io/).
2. Configure your API key in your environment variables with the key `ELEVEN_LABS_API_KEY`.
3. (Optional) Create a new [Voice model option](http://localhost:42110/server/admin/database/voicemodeloption/) with a specific voice ID from whichever voice you want to use. You can explore the options [here](https://elevenlabs.io/app/voice-library).
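Step 2 above only requires exporting the environment variable before starting the Khoj server; the key value below is a placeholder:

```shell
# Placeholder value: use your API key from https://elevenlabs.io
export ELEVEN_LABS_API_KEY="elevenlabs_your_key_here"
```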

---
sidebar_position: 0
slug: /
keywords: ["khoj", "khoj ai", "khoj docs", "khoj documentation", "khoj features", "khoj overview", "khoj quickstart", "khoj chat", "khoj search", "khoj cloud", "khoj self-host", "khoj setup", "open source ai", "local llm", "ai copilot", "second brain", "personal ai", "ai search engine", "research assistant", "open source llm", "self-host llm", "self-host chatgpt", "ai chatbot", "ai assistant", "ai research assistant", "ai knowledge base", "ai knowledge graph", "ai personal assistant", "ai second brain", "open source ai assistant"]
---
# Overview
<p align="center"><img src="/img/khoj-logo-sideways-500.png" width="200" alt="Khoj Logo"></img></p>
<div align="center">
<b>Your Second Brain</b>
</div>
Welcome to the Khoj Docs! This is the best place to get set up and explore Khoj's features.
- Khoj is an open source, personal AI
- You can [chat](/features/chat) with it about anything. It'll use files you shared with it to respond, when relevant. It can also access information from the public internet.
- Quickly [find](/features/search) relevant notes and documents using natural language
- It understands PDF, plaintext, markdown, org-mode files, and [Notion pages](/data-sources/notion_integration).
- Access it from your [Emacs](/clients/emacs), [Obsidian](/clients/obsidian), the [Khoj desktop app](/clients/desktop), or [any web browser](/clients/web)
- Use our [cloud](https://app.khoj.dev/login) instance to access your Khoj anytime from anywhere, or [self-host](/get-started/setup) on consumer hardware for privacy
![demo_chat](https://assets.khoj.dev/quadratic_equation_khoj_web.gif)
## Quickstart
- [Try Khoj Cloud](https://app.khoj.dev) to get started quickly
- [Read these instructions](/get-started/setup) to self-host a private instance of Khoj
## At a Glance
![demo_chat](https://assets.khoj.dev/using_khoj_for_studying.gif)

```mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
```
## Setup Khoj
These are the general setup instructions for self-hosted Khoj.
You can install the Khoj server using either [Docker](?server=docker) or [Pip](?server=pip).
:::info[Offline Model + GPU]
To use the offline chat model with your GPU, we recommend using the Docker setup with Ollama. You can also use the local Khoj setup via the Python package directly.
:::
:::info[First Run]
Restart your Khoj server after the first run to ensure all settings are applied correctly.
:::
<Tabs groupId="server" queryString>
<TabItem value="docker" label="Docker">
<Tabs groupId="operating-systems" queryString="os">
<TabItem value="macos" label="MacOS">
<h3>Prerequisites</h3>
<h4>Docker</h4>
- *Option 1*: Click here to install [Docker Desktop](https://docs.docker.com/desktop/install/mac-install/). Make sure you also install the [Docker Compose](https://docs.docker.com/desktop/install/mac-install/) tool.
- *Option 2*: Use [Homebrew](https://brew.sh/) to install Docker and Docker Compose.
```shell
brew install --cask docker
brew install docker-compose
```
<h3>Setup</h3>
1. Download the Khoj docker-compose.yml file [from Github](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml)
```shell
mkdir ~/.khoj && cd ~/.khoj
wget https://raw.githubusercontent.com/khoj-ai/khoj/master/docker-compose.yml
```
2. Configure the environment variables in the `docker-compose.yml`
- Set `KHOJ_ADMIN_PASSWORD`, `KHOJ_DJANGO_SECRET_KEY` (and optionally the `KHOJ_ADMIN_EMAIL`) to something secure. This allows you to customize Khoj later via the admin panel.
- Set `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY` to your API key if you want to use OpenAI, Anthropic or Gemini commercial chat models respectively.
- Uncomment `OPENAI_BASE_URL` to use [Ollama](/advanced/ollama?type=first-run&server=docker#setup) running on your host machine. Or set it to the URL of your OpenAI compatible API like vLLM or [LMStudio](/advanced/lmstudio).
3. Start Khoj by running the following command in the same directory as your docker-compose.yml file.
```shell
cd ~/.khoj
docker-compose up
```
</TabItem>
<TabItem value="windows" label="Windows">
<h3>Prerequisites</h3>
1. Install [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install) and restart your machine
```shell
# Run in PowerShell
wsl --install
```
2. Install [Docker Desktop](https://docs.docker.com/desktop/install/windows-install/) with **[WSL2 backend](https://docs.docker.com/desktop/wsl/#turn-on-docker-desktop-wsl-2)** (default)
<h3>Setup</h3>
1. Download the Khoj docker-compose.yml file [from Github](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml)
```shell
# Windows users should use their WSL2 terminal to run these commands
mkdir ~/.khoj && cd ~/.khoj
wget https://raw.githubusercontent.com/khoj-ai/khoj/master/docker-compose.yml
```
2. Configure the environment variables in the `docker-compose.yml`
- Set `KHOJ_ADMIN_PASSWORD`, `KHOJ_DJANGO_SECRET_KEY` (and optionally the `KHOJ_ADMIN_EMAIL`) to something secure. This allows you to customize Khoj later via the admin panel.
- Set `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY` to your API key if you want to use OpenAI, Anthropic or Gemini commercial chat models respectively.
- Uncomment `OPENAI_BASE_URL` to use [Ollama](/advanced/ollama) running on your host machine. Or set it to the URL of your OpenAI compatible API like vLLM or [LMStudio](/advanced/lmstudio).
3. Start Khoj by running the following command in the same directory as your docker-compose.yml file.
```shell
# Windows users should use their WSL2 terminal to run these commands
cd ~/.khoj
docker-compose up
```
</TabItem>
<TabItem value="linux" label="Linux">
<h3>Prerequisites</h3>
Install [Docker Desktop](https://docs.docker.com/desktop/install/linux/).
You can also use your package manager to install Docker Engine & Docker Compose.
<h3>Setup</h3>
1. Download the Khoj docker-compose.yml file [from Github](https://github.com/khoj-ai/khoj/blob/master/docker-compose.yml)
```shell
mkdir ~/.khoj && cd ~/.khoj
wget https://raw.githubusercontent.com/khoj-ai/khoj/master/docker-compose.yml
```
2. Configure the environment variables in the `docker-compose.yml`
- Set `KHOJ_ADMIN_PASSWORD`, `KHOJ_DJANGO_SECRET_KEY` (and optionally the `KHOJ_ADMIN_EMAIL`) to something secure. This allows you to customize Khoj later via the admin panel.
- Set `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY` to your API key if you want to use OpenAI, Anthropic or Gemini commercial chat models respectively.
- Uncomment `OPENAI_BASE_URL` to use [Ollama](/advanced/ollama) running on your host machine. Or set it to the URL of your OpenAI compatible API like vLLM or [LMStudio](/advanced/lmstudio).
3. Start Khoj by running the following command in the same directory as your docker-compose.yml file.
```shell
cd ~/.khoj
docker-compose up
```
</TabItem>
</Tabs>
:::info[Remote Access]
By default Khoj is only accessible on the machine it is running on. To access Khoj from a remote machine see [Remote Access Docs](/advanced/remote).
:::
Your setup is complete once you see `🌖 Khoj is ready to use` in the server logs on your terminal.
</TabItem>
<TabItem value="pip" label="Pip">
<h3>1. Install Postgres (with PgVector)</h3>
Khoj uses Postgres DB for all server configuration and to scale to multi-user setups. It uses the pgvector package in Postgres to manage your document embeddings. Both Postgres and pgvector need to be installed for Khoj to work.
Install [Postgres.app](https://postgresapp.com/). This comes pre-installed with pgvector and the relevant dependencies.
</TabItem>
</Tabs>
<h3>2. Create the Khoj database</h3>
Make sure to update your environment variables to match your Postgres configuration if you're using a different name. The default values should work for most people. When prompted for a password, you can use the default password `postgres`, or configure it to your preference. Make sure to set the environment variable `POSTGRES_PASSWORD` to the same value as the password you set here.
<Tabs groupId="operating-systems" queryString="os">
<TabItem value="macos" label="MacOS">
```shell
createdb khoj -U postgres --password
```
</TabItem>
<TabItem value="windows" label="Windows">
```shell
createdb -U postgres khoj --password
```
</TabItem>
<TabItem value="linux" label="Linux">
```shell
sudo -u postgres createdb khoj --password
```
</TabItem>
</Tabs>
:::info[Postgres Env Config]
Make sure to update the `POSTGRES_HOST`, `POSTGRES_PORT`, `POSTGRES_USER`, `POSTGRES_DB` or `POSTGRES_PASSWORD` environment variables to match any customizations in your Postgres configuration.
:::
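As a sketch, if you followed the defaults above (database `khoj`, user `postgres`, the default `postgres` password), the environment would look like this; `localhost` and `5432` are the usual Postgres defaults, so adjust any value you customized:

```shell
# Postgres connection settings Khoj reads at startup.
# These match the defaults used in the createdb step.
export POSTGRES_HOST="localhost"
export POSTGRES_PORT=5432
export POSTGRES_USER="postgres"
export POSTGRES_DB="khoj"
export POSTGRES_PASSWORD="postgres"  # use the password you set during createdb
```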
<h3>3. Install Khoj Server</h3>
- Make sure [python](https://realpython.com/installing-python/) and [pip](https://pip.pypa.io/en/stable/installation/) are installed on your machine
- Check [llama-cpp-python setup](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#supported-backends) if you hit any llama-cpp issues with the installation
Run the following command in your terminal to install the Khoj server.
<Tabs groupId="operating-systems" queryString="os">
<TabItem value="macos" label="MacOS">
<Tabs groupId="gpu" queryString="gpu">
<TabItem value="arm" label="ARM/M1+">
```shell
CMAKE_ARGS="-DGGML_METAL=on" python -m pip install khoj
```
</TabItem>
<TabItem value="intel" label="Intel">
```shell
python -m pip install khoj
```
</TabItem>
</Tabs>
</TabItem>
<TabItem value="windows" label="Windows">
Run the following command in PowerShell on Windows
<Tabs groupId="gpu" queryString="gpu">
<TabItem value="cpu" label="CPU">
```shell
# Install Khoj
py -m pip install khoj
```
</TabItem>
<TabItem value="nvidia" label="NVIDIA (CUDA) GPU">
```shell
# 1. To use NVIDIA (CUDA) GPU
$env:CMAKE_ARGS = "-DGGML_CUDA=on"
# 2. Install Khoj
py -m pip install khoj
```
</TabItem>
<TabItem value="amd" label="AMD (ROCm) GPU">
```shell
# 1. To use AMD (ROCm) GPU
$env:CMAKE_ARGS = "-DGGML_HIPBLAS=on"
# 2. Install Khoj
py -m pip install khoj
```
</TabItem>
<TabItem value="vulkan" label="VULKAN GPU">
```shell
# 1. To use Vulkan GPU
$env:CMAKE_ARGS = "-DGGML_VULKAN=on"
# 2. Install Khoj
py -m pip install khoj
```
</TabItem>
</Tabs>
</TabItem>
<TabItem value="linux" label="Linux">
<Tabs groupId="gpu" queryString="gpu">
<TabItem value="cpu" label="CPU">
```shell
python -m pip install khoj
```
</TabItem>
<TabItem value="nvidia" label="NVIDIA (CUDA) GPU">
```shell
CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 python -m pip install khoj
```
</TabItem>
<TabItem value="amd" label="AMD (ROCm) GPU">
```shell
CMAKE_ARGS="-DGGML_HIPBLAS=on" FORCE_CMAKE=1 python -m pip install khoj
```
</TabItem>
<TabItem value="vulkan" label="VULKAN GPU">
```shell
CMAKE_ARGS="-DGGML_VULKAN=on" FORCE_CMAKE=1 python -m pip install khoj
```
</TabItem>
</Tabs>
</TabItem>
</Tabs>
<h3>4. Start Khoj Server</h3>
Before getting started, configure the following environment variables in your terminal for the first run
<Tabs groupId="operating-systems" queryString="os">
<TabItem value="macos" label="MacOS">
```shell
export KHOJ_ADMIN_EMAIL=<your-email>
export KHOJ_ADMIN_PASSWORD=<your-password>
```
</TabItem>
<TabItem value="windows" label="Windows">
If you're using PowerShell:
```shell
$env:KHOJ_ADMIN_EMAIL="<your-email>"
$env:KHOJ_ADMIN_PASSWORD="<your-password>"
```
</TabItem>
<TabItem value="linux" label="Linux">
```shell
export KHOJ_ADMIN_EMAIL=<your-email>
export KHOJ_ADMIN_PASSWORD=<your-password>
```
</TabItem>
</Tabs>
Run the following command from your terminal to start the Khoj service.
```shell
khoj --anonymous-mode
```
`--anonymous-mode` allows access to Khoj without requiring login. This is usually fine for local only, single user setups. If you need authentication follow the [authentication setup docs](/advanced/authentication).
<h4>First Run</h4>
On the first run of the above command, you will be prompted to:
1. Create an admin account with an email and secure password
2. Customize the chat models to enable
- Keep your [OpenAI](https://platform.openai.com/api-keys), [Anthropic](https://console.anthropic.com/account/keys), [Gemini](https://aistudio.google.com/app/apikey) API keys and [OpenAI](https://platform.openai.com/docs/models), [Anthropic](https://docs.anthropic.com/en/docs/about-claude/models#model-names), [Gemini](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-models), [Offline](https://huggingface.co/models?pipeline_tag=text-generation&library=gguf) chat model names handy to set any of them up during first run.
3. Your setup is complete once you see `🌖 Khoj is ready to use` in the server logs on your terminal!
:::tip[Auto Start]
To start Khoj automatically in the background use [Task scheduler](https://www.windowscentral.com/how-create-automated-task-using-task-scheduler-windows-10) on Windows or [Cron](https://en.wikipedia.org/wiki/Cron) on Mac, Linux (e.g. with `@reboot khoj`)
:::
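On Mac or Linux, the cron entry could be sketched as below. This assumes `khoj` is on cron's PATH; the snippet resolves the absolute path with `command -v` in case it is not, then prints the line to add via `crontab -e`:

```shell
# Build the @reboot crontab line for khoj (a sketch; add it via `crontab -e`)
KHOJ_BIN="$(command -v khoj || echo khoj)"
echo "@reboot $KHOJ_BIN"
```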
</TabItem>
</Tabs>
## Use Khoj
You can now open the web app at http://localhost:42110 and start interacting!<br />
Nothing else is necessary, but you can customize your setup further by following the steps below.
:::info[First Message to Offline Chat Model]
The offline chat model gets downloaded when you first send a message to it. The download can take a few minutes! Subsequent messages should be faster.
:::
<h4>Login to the Khoj Admin Panel</h4>
Go to http://localhost:42110/server/admin and login with the admin credentials you setup during installation.
:::info[CSRF Error]
Ensure you are using **localhost, not 127.0.0.1**, to access the admin panel to avoid the CSRF error.
:::
:::info[CSRF Trusted Origin or Unset Cookie Error]
If using a load balancer/reverse_proxy in front of your Khoj server: Set the environment variable KHOJ_ALLOWED_DOMAIN=your-internal-ip-or-domain to avoid this error.
If unset, it defaults to KHOJ_DOMAIN.
:::
:::info[DISALLOWED HOST or Bad Request (400) Error]
You may hit this if you try to access Khoj exposed on a custom domain (e.g. 192.168.12.3 or example.com) or over HTTP.
Set the environment variables KHOJ_DOMAIN=your-external-ip-or-domain and KHOJ_NO_HTTPS=True if required to avoid this error.
:::
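For instance, to allow access over plain HTTP at a LAN address (using the example IP from above; substitute your own address or domain):

```shell
# Example: allow access to khoj via a LAN IP over plain HTTP
export KHOJ_DOMAIN=192.168.12.3
export KHOJ_NO_HTTPS=True
```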
:::tip[Note]
Using Safari on Mac? You might not be able to login to the admin panel. Try using Chrome or Firefox instead.
:::
<h4>Configure Chat Model</h4>
Set up the chat models you'd like to use. Khoj supports local and online chat models.
<Tabs groupId="chatmodel" queryString>
<TabItem value="openai" label="OpenAI">
:::info[Ollama Integration]
Using Ollama? See the [Ollama Integration](/advanced/ollama) section for more custom setup instructions.
:::
1. Create a new [AI Model Api](http://localhost:42110/server/admin/database/aimodelapi/add) in the server admin settings.
- Add your [OpenAI API key](https://platform.openai.com/api-keys)
- Give the configuration a friendly name like `OpenAI`
- (Optional) Set the API base URL. It is only relevant if you're using another OpenAI-compatible proxy server like [Ollama](/advanced/ollama) or [LMStudio](/advanced/lmstudio).<br />
![example configuration for ai model api](/img/example_openai_processor_config.png)
2. Create a new [chat model](http://localhost:42110/server/admin/database/chatmodel/add)
- Set the `chat-model` field to an [OpenAI chat model](https://platform.openai.com/docs/models). Example: `gpt-4o`.
- Make sure to set the `model-type` field to `OpenAI`.
- If your model supports vision, set the `vision enabled` field to `true`. This is currently only supported for OpenAI models with vision capabilities.
- The `tokenizer` and `max-prompt-size` fields are optional. Set them only if you're sure of the tokenizer or token limit for the model you're using. Contact us if you're unsure what to do here.<br />
![example configuration for chat model options](/img/example_chatmodel_option.png)
</TabItem>
<TabItem value="anthropic" label="Anthropic">
1. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) in the server admin settings.
- Add your [Anthropic API key](https://console.anthropic.com/account/keys)
- Give the configuration a friendly name like `Anthropic`. Do not configure the API base url.
2. Create a new [chat model](http://localhost:42110/server/admin/database/chatmodel/add)
- Set the `chat-model` field to an [Anthropic chat model](https://docs.anthropic.com/en/docs/about-claude/models#model-names). Example: `claude-3-5-sonnet-20240620`.
- Set the `model-type` field to `Anthropic`.
- Set the `ai model api` field to the Anthropic AI Model API you created in step 1.
</TabItem>
<TabItem value="gemini" label="Gemini">
1. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) in the server admin settings.
- Add your [Gemini API key](https://aistudio.google.com/app/apikey)
- Give the configuration a friendly name like `Gemini`. Do not configure the API base url.
2. Create a new [chat model](http://localhost:42110/server/admin/database/chatmodel/add)
- Set the `chat-model` field to a [Google Gemini chat model](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-models). Example: `gemini-2.0-flash`.
- Set the `model-type` field to `Google`.
- Set the `ai model api` field to the Gemini AI Model API you created in step 1.
</TabItem>
<TabItem value="offline" label="Offline">
Offline chat stays completely private and can work without internet using any open-weights model.
:::tip[System Requirements]
- Minimum 8 GB RAM. Recommend **16Gb VRAM**
- Minimum **5 GB of Disk** available
- A Nvidia, AMD GPU or a Mac M1+ machine would significantly speed up chat responses
:::
1. Get the name of your preferred chat model from [HuggingFace](https://huggingface.co/models?pipeline_tag=text-generation&library=gguf). *Most GGUF format chat models are supported*.
2. Open the [create chat model page](http://localhost:42110/server/admin/database/chatmodel/add/) on the admin panel
3. Set the `chat-model` field to the name of your preferred chat model
- Make sure the `model-type` is set to `Offline`
4. Set the newly added chat model as your preferred model in your [User chat settings](http://localhost:42110/settings) and [Server chat settings](http://localhost:42110/server/admin/database/serverchatsettings/).
5. Restart the Khoj server and [start chatting](http://localhost:42110) with your new offline model!
</TabItem>
</Tabs>
:::tip[Multiple Chat Models]
Set your preferred default chat model in the `Default`, `Advanced` fields of your [ServerChatSettings](http://localhost:42110/server/admin/database/serverchatsettings/).
Khoj uses these chat models for all intermediate steps like intent detection, web search etc.
:::
:::info[Chat Model Fields]
- The `tokenizer` and `max-prompt-size` fields are optional. Set them only if you're sure of the tokenizer or token limit for the model you're using. This improves context stuffing. Contact us if you're unsure what to do here.
- Only tick the `vision enabled` field for OpenAI models with vision capabilities like gpt-4o. Vision capabilities in other chat models are not currently utilized.
:::
### Sync your Knowledge
- You can chat with your notes and documents using Khoj.
- Khoj can keep your files and folders synced using the Khoj [Desktop](/clients/desktop#setup), [Obsidian](/clients/obsidian#setup) or [Emacs](/clients/emacs#setup) clients.
- Your [Notion workspace](/data-sources/notion_integration) can be directly synced from the web app.
- You can also just drag and drop specific files you want to chat with on the [Web app](/clients/web#upload-documents).
### Setup Khoj Clients
The Khoj web app is available by default to chat, search and configure Khoj.<br />
You can also install a Khoj client to easily access it from Obsidian, Emacs, Whatsapp or your OS and keep your documents synced with Khoj.
:::info[Note]
Set the host URL on your clients settings page to your Khoj server URL. By default, use `http://127.0.0.1:42110` or `http://localhost:42110`. Note that `localhost` may not work in all cases.
:::
<Tabs groupId="client" queryString>
<TabItem value="desktop" label="Desktop">
- Read the Khoj Desktop app [setup docs](/clients/desktop#setup).
</TabItem>
<TabItem value="emacs" label="Emacs">
- Read the Khoj Emacs package [setup docs](/clients/emacs#setup).
</TabItem>
<TabItem value="obsidian" label="Obsidian">
- Read the Khoj Obsidian plugin [setup docs](/clients/obsidian#setup).
</TabItem>
<TabItem value="whatsapp" label="Whatsapp">
- Read the Khoj Whatsapp app [setup docs](/clients/whatsapp).
</TabItem>
</Tabs>
## Upgrade
<Tabs groupId="environment">
<TabItem value="localsetup" label="Local Setup">
### Upgrade Server
<Tabs groupId="server" queryString>
<TabItem value="pip" label="Pip">
```shell
pip install --upgrade khoj
```
*Note: To upgrade to the latest pre-release version of the khoj server, add pip's `--pre` flag to the command above*
</TabItem>
<TabItem value="docker" label="Docker">
Run the commands below from the same directory where you have your `docker-compose.yml` file.
This will fetch the latest build and upgrade your server.
```shell
# Windows users should use their WSL2 terminal to run these commands
cd ~/.khoj # assuming your khoj docker-compose.yml file is here
docker-compose up --build
```
</TabItem>
</Tabs>
### Upgrade Clients
<Tabs groupId="client" queryString>
<TabItem value="desktop" label="Desktop">
- The Desktop app automatically updates to the latest released version on restart.
- You can manually download the latest version from the [Khoj Website](https://khoj.dev/downloads).
</TabItem>
<TabItem value="emacs" label="Emacs">
- Use your Emacs package manager to upgrade
- See [khoj.el package setup](/clients/emacs#setup) for details
@@ -269,9 +434,9 @@ Set the host URL on your clients settings page to your Khoj server URL. By defau
</Tabs>
## Uninstall
<Tabs groupId="environment">
<TabItem value="localsetup" label="Local Setup">
### Uninstall Server
<Tabs groupId="server" queryString>
<TabItem value="pip" label="Pip">
```shell
# uninstall khoj server
pip uninstall khoj
@@ -281,19 +446,25 @@ Set the host URL on your clients settings page to your Khoj server URL. By defau
```
</TabItem>
<TabItem value="docker" label="Docker">
Run the command below from the same directory where you have your `docker-compose` file.
This will remove the server containers, networks, images and volumes.
```shell
docker-compose down --volumes
```
</TabItem>
</Tabs>
### Uninstall Clients
<Tabs groupId="client" queryString>
<TabItem value="desktop" label="Desktop">
Uninstall the Khoj Desktop client in the standard way from your OS.
</TabItem>
<TabItem value="emacs" label="Emacs">
Uninstall the Khoj Emacs package in the standard way from Emacs.
</TabItem>
<TabItem value="obsidian" label="Obsidian">
Uninstall via the Community plugins tab on the settings pane in the Obsidian app
</TabItem>
</Tabs>
@@ -310,7 +481,6 @@ Set the host URL on your clients settings page to your Khoj server URL. By defau
```
3. Now start `khoj` using the standard steps described earlier
#### Install fails while building Tokenizer dependency
- **Details**: `pip install khoj` fails while building the `tokenizers` dependency. Complains about Rust.
- **Fix**: Install Rust to build the tokenizers package. For example on Mac run:
@@ -321,18 +491,6 @@ Set the host URL on your clients settings page to your Khoj server URL. By defau
```
- **Refer**: [Issue with Fix](https://github.com/khoj-ai/khoj/issues/82#issuecomment-1241890946) for more details
#### Khoj in Docker errors out with "Killed" in error message
- **Fix**: Increase RAM available to Docker Containers in Docker Settings
- **Refer**: [StackOverflow Solution](https://stackoverflow.com/a/50770267), [Configure Resources on Docker for Mac](https://docs.docker.com/desktop/mac/#resources)
## Advanced
### Self Host on Custom Domain
You can self-host Khoj behind a custom domain as well. To do so, you need to set the `KHOJ_DOMAIN` environment variable to your domain (e.g., `export KHOJ_DOMAIN=my-khoj-domain.com` or add it to your `docker-compose.yml`). By default, the Khoj server you set up will not be accessible outside of `localhost` or `127.0.0.1`.
:::warning[Without HTTPS certificate]
To expose Khoj on a custom domain over the public internet, use of an SSL certificate is strongly recommended. You can use [Let's Encrypt](https://letsencrypt.org/) to get a free SSL certificate for your domain.
To disable HTTPS, set the `KHOJ_NO_HTTPS` environment variable to `True`. This can be useful if Khoj is only accessible behind a secure, private network.
:::
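Putting it together, a custom-domain launch could look like this sketch (`my-khoj-domain.com` is the placeholder domain from above; start the server afterwards as usual):

```shell
# Sketch: configure a custom domain before starting khoj
export KHOJ_DOMAIN=my-khoj-domain.com
# Only set this if khoj is reachable solely over a secure, private network
export KHOJ_NO_HTTPS=True
```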


@@ -6,20 +6,22 @@ sidebar_position: 2
Here are some top-level performance metrics for Khoj. These are rough estimates and will vary based on your hardware and data.
:::info
These performance metrics were last evaluated in 2022.
:::
### Search performance
- Semantic search using the default embeddings model is fairly fast at \<100 ms across all content types
- Reranking using the cross-encoder model is slower at \<2s on 15 results. Tweak `top_k` to tradeoff speed for accuracy of results
- Filters in query (e.g. by file, word or date) usually add \<20ms to query latency
### Indexing performance
- Indexing is more strongly impacted by the size of the source data
- Indexing 100K+ line corpus of notes takes about 10 minutes
- Indexing 4000+ images takes about 15 minutes and more than 8Gb of RAM
- Note: *It should only take this long on the first run* as the index is incrementally updated
### Miscellaneous
- Testing done on a Mac M1 and a \>100K line corpus of notes
- Search, indexing on a GPU has not been tested yet


@@ -2,9 +2,7 @@
sidebar_position: 3
---
# Advanced Usage
# Query Filters
Use structured query syntax to filter which entries from your knowledge base are used for search results or chat responses.


@@ -14,13 +14,6 @@ We don't send any personal information or any information from/about your conten
## Disable Telemetry
If you're self-hosting Khoj, you can opt out of telemetry at any time by setting the `KHOJ_TELEMETRY_DISABLE` environment variable to `True` via shell or `docker-compose.yml`
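For example, via shell before starting the server (the `docker-compose.yml` route sets the same variable under its `environment:` section):

```shell
# Opt out of telemetry for this shell session, then start khoj as usual
export KHOJ_TELEMETRY_DISABLE=True
```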
If you have any questions or concerns, please reach out to us on [Discord](https://discord.gg/BDgyabRM6e).


@@ -37,6 +37,9 @@ const config = {
locales: ['en'],
},
// Add a widget for Chatwoot for live chat if users need help
clientModules: [require.resolve('./src/components/ChatwootWidget.js')],
presets: [
[
'classic',
@@ -69,19 +72,17 @@ const config = {
}),
],
],
themeConfig:
/** @type {import('@docusaurus/preset-classic').ThemeConfig} */
({
image: 'img/khoj_documentation.png',
metadata: [
{name: 'og:title', content: 'Docs'},
{name: 'og:type', content: 'website'},
{name: 'og:site_name', content: 'Khoj Documentation'},
{name: 'og:description', content: 'Quickly get started with using or self-hosting Khoj'},
{name: 'og:image', content: 'https://assets.khoj.dev/link_preview_docs.png'},
{name: 'og:url', content: 'https://docs.khoj.dev'},
{name: 'keywords', content: 'khoj, khoj ai, chatgpt, open source ai, open source, transparent, accessible, trustworthy, hackable, index notes, rag, productivity'}
],
navbar: {
title: 'Khoj',
@@ -92,23 +93,31 @@ const config = {
items: [
{
href: 'https://github.com/khoj-ai/khoj',
label: 'GitHub',
position: 'right',
className: 'header-github-link',
title: 'Codebase',
'aria-label': 'GitHub repository',
},
{
href: 'https://app.khoj.dev/login',
label: 'Cloud',
position: 'right',
className: 'header-cloud-link',
title: 'Khoj Cloud',
'aria-label': 'Khoj Cloud',
},
{
href: 'https://discord.gg/BDgyabRM6e',
label: 'Discord',
position: 'right',
className: 'header-discord-link',
title: 'Community',
'aria-label': 'Discord community',
},
{
href: 'https://blog.khoj.dev',
label: 'Blog',
position: 'right',
className: 'header-blog-link',
title: 'Blog',
'aria-label': 'Khoj Blog',
},
],
},


@@ -16,7 +16,7 @@
"dependencies": {
"@docusaurus/core": "^3.2.1",
"@docusaurus/plugin-sitemap": "^3.2.1",
"@docusaurus/preset-classic": "^3.7.0",
"@mdx-js/react": "^3.0.0",
"clsx": "^2.0.0",
"prism-react-renderer": "^2.3.0",


@@ -0,0 +1,19 @@
import ExecutionEnvironment from '@docusaurus/ExecutionEnvironment';
// Only execute on client-side
if (ExecutionEnvironment.canUseDOM) {
(function (d, t) {
var BASE_URL = "https://app.chatwoot.com";
var g = d.createElement(t), s = d.getElementsByTagName(t)[0];
g.src = BASE_URL + "/packs/js/sdk.js";
g.defer = true;
g.async = true;
s.parentNode.insertBefore(g, s);
g.onload = function () {
window.chatwootSDK.run({
websiteToken: 'cFxvnLSjfE2UF4UUiPCA5NsF',
baseUrl: BASE_URL
})
}
})(document, 'script');
}


@@ -8,13 +8,13 @@
/* You can override the default Infima variables here. */
:root {
--ifm-color-primary: #FFA07A;
--ifm-color-primary-dark: #FFA07A;
--ifm-color-primary-darker: #FFA07A;
--ifm-color-primary-darkest: #FFA07A;
--ifm-color-primary-light: #FFA07A;
--ifm-color-primary-lighter: #FFA07A;
--ifm-color-primary-lightest: #FFA07A;
--ifm-code-font-size: 95%;
--ifm-heading-font-family: 'Source Sans 3', sans-serif;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1);
@@ -22,16 +22,64 @@
/* For readability concerns, you should choose a lighter palette in dark mode. */
[data-theme='dark'] {
--ifm-color-primary: #FFA07A;
--ifm-color-primary-dark: #FFA07A;
--ifm-color-primary-darker: #FFA07A;
--ifm-color-primary-darkest: #FFA07A;
--ifm-color-primary-light: #FFA07A;
--ifm-color-primary-lighter: #FFA07A;
--ifm-color-primary-lightest: #FFA07A;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3);
}
body {
font-family: 'Source Sans 3', sans-serif;
}
.header-github-link::before {
content: '';
width: 24px;
height: 24px;
display: flex;
background-color: var(--ifm-navbar-link-color);
mask-image: url("data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E");
transition: background-color var(--ifm-transition-fast) var(--ifm-transition-timing-default);
}
.header-blog-link::before {
content: '';
width: 24px;
height: 24px;
display: flex;
background-color: var(--ifm-navbar-link-color);
mask-image: url("data:image/svg+xml,%3Csvg viewBox='0 0 512 512' xmlns='http://www.w3.org/2000/svg'%3E%3C!--!Font Awesome Free 6.7.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free Copyright 2024 Fonticons, Inc.--%3E%3Cpath d='M278.5 215.6L23 471c-9.4 9.4-9.4 24.6 0 33.9s24.6 9.4 33.9 0l57-57 68 0c49.7 0 97.9-14.4 139-41c11.1-7.2 5.5-23-7.8-23c-5.1 0-9.2-4.1-9.2-9.2c0-4.1 2.7-7.6 6.5-8.8l81-24.3c2.5-.8 4.8-2.1 6.7-4l22.4-22.4c10.1-10.1 2.9-27.3-11.3-27.3l-32.2 0c-5.1 0-9.2-4.1-9.2-9.2c0-4.1 2.7-7.6 6.5-8.8l112-33.6c4-1.2 7.4-3.9 9.3-7.7C506.4 207.6 512 184.1 512 160c0-41-16.3-80.3-45.3-109.3l-5.5-5.5C432.3 16.3 393 0 352 0s-80.3 16.3-109.3 45.3L139 149C91 197 64 262.1 64 330l0 55.3L253.6 195.8c6.2-6.2 16.4-6.2 22.6 0c5.4 5.4 6.1 13.6 2.2 19.8z'/%3E%3C/svg%3E");
transition: background-color var(--ifm-transition-fast) var(--ifm-transition-timing-default);
}
.header-cloud-link::before {
content: '';
width: 24px;
height: 24px;
display: flex;
background-color: var(--ifm-navbar-link-color);
mask-image: url("data:image/svg+xml,%3Csvg viewBox='0 0 512 512' xmlns='http://www.w3.org/2000/svg'%3E%3C!--!Font Awesome Free 6.7.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free Copyright 2024 Fonticons, Inc.--%3E%3Cpath d='M0 224c0 53 43 96 96 96l47.2 0L290 202.5c17.6-14.1 42.6-14 60.2 .2s22.8 38.6 12.8 58.8L333.7 320l18.3 0 64 0c53 0 96-43 96-96s-43-96-96-96c-.5 0-1.1 0-1.6 0c1.1-5.2 1.6-10.5 1.6-16c0-44.2-35.8-80-80-80c-24.3 0-46.1 10.9-60.8 28C256.5 24.3 219.1 0 176 0C114.1 0 64 50.1 64 112c0 7.1 .7 14.1 1.9 20.8C27.6 145.4 0 181.5 0 224zm330.1 3.6c-5.8-4.7-14.2-4.7-20.1-.1l-160 128c-5.3 4.2-7.4 11.4-5.1 17.8s8.3 10.7 15.1 10.7l70.1 0L177.7 488.8c-3.4 6.7-1.6 14.9 4.3 19.6s14.2 4.7 20.1 .1l160-128c5.3-4.2 7.4-11.4 5.1-17.8s-8.3-10.7-15.1-10.7l-70.1 0 52.4-104.8c3.4-6.7 1.6-14.9-4.2-19.6z'/%3E%3C/svg%3E");
transition: background-color var(--ifm-transition-fast) var(--ifm-transition-timing-default);
}
.header-discord-link::before {
content: '';
width: 30px;
height: 24px;
display: flex;
background-color: var(--ifm-navbar-link-color);
mask-image: url("data:image/svg+xml,%3Csvg viewBox='0 0 640 512' xmlns='http://www.w3.org/2000/svg'%3E%3C!--!Font Awesome Free 6.7.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free Copyright 2024 Fonticons, Inc.--%3E%3Cpath d='M524.5 69.8a1.5 1.5 0 0 0 -.8-.7A485.1 485.1 0 0 0 404.1 32a1.8 1.8 0 0 0 -1.9 .9 337.5 337.5 0 0 0 -14.9 30.6 447.8 447.8 0 0 0 -134.4 0 309.5 309.5 0 0 0 -15.1-30.6 1.9 1.9 0 0 0 -1.9-.9A483.7 483.7 0 0 0 116.1 69.1a1.7 1.7 0 0 0 -.8 .7C39.1 183.7 18.2 294.7 28.4 404.4a2 2 0 0 0 .8 1.4A487.7 487.7 0 0 0 176 479.9a1.9 1.9 0 0 0 2.1-.7A348.2 348.2 0 0 0 208.1 430.4a1.9 1.9 0 0 0 -1-2.6 321.2 321.2 0 0 1 -45.9-21.9 1.9 1.9 0 0 1 -.2-3.1c3.1-2.3 6.2-4.7 9.1-7.1a1.8 1.8 0 0 1 1.9-.3c96.2 43.9 200.4 43.9 295.5 0a1.8 1.8 0 0 1 1.9 .2c2.9 2.4 6 4.9 9.1 7.2a1.9 1.9 0 0 1 -.2 3.1 301.4 301.4 0 0 1 -45.9 21.8 1.9 1.9 0 0 0 -1 2.6 391.1 391.1 0 0 0 30 48.8 1.9 1.9 0 0 0 2.1 .7A486 486 0 0 0 610.7 405.7a1.9 1.9 0 0 0 .8-1.4C623.7 277.6 590.9 167.5 524.5 69.8zM222.5 337.6c-29 0-52.8-26.6-52.8-59.2S193.1 219.1 222.5 219.1c29.7 0 53.3 26.8 52.8 59.2C275.3 311 251.9 337.6 222.5 337.6zm195.4 0c-29 0-52.8-26.6-52.8-59.2S388.4 219.1 417.9 219.1c29.7 0 53.3 26.8 52.8 59.2C470.7 311 447.5 337.6 417.9 337.6z'/%3E%3C/svg%3E");
transition: background-color var(--ifm-transition-fast) var(--ifm-transition-timing-default);
}
.header-github-link:hover::before,
.header-cloud-link:hover::before,
.header-discord-link:hover::before,
.header-blog-link:hover::before {
background-color: var(--ifm-navbar-link-hover-color);
transform: scale(1.1);
}

File diff suppressed because it is too large.


@@ -1,7 +1,7 @@
{
"id": "khoj",
"name": "Khoj",
"version": "1.22.2",
"version": "1.38.0",
"minAppVersion": "0.15.0",
"description": "Your Second Brain",
"author": "Khoj Inc.",


@@ -1,40 +1,64 @@
FROM ubuntu:jammy
# syntax=docker/dockerfile:1
FROM ubuntu:jammy AS base
LABEL homepage="https://khoj.dev"
LABEL repository="https://github.com/khoj-ai/khoj"
LABEL org.opencontainers.image.source="https://github.com/khoj-ai/khoj"
LABEL org.opencontainers.image.description="Your second brain, containerized for multi-user, cloud deployment"
# Install System Dependencies
RUN apt update -y && apt -y install python3-pip libsqlite3-0 ffmpeg libsm6 libxext6 swig curl
# Install Node.js and Yarn
RUN curl -sL https://deb.nodesource.com/setup_20.x | bash -
RUN apt -y install nodejs
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt -y install yarn
RUN apt update -y && apt -y install \
python3-pip \
libsqlite3-0 \
ffmpeg \
libsm6 \
libxext6 \
swig \
curl \
# Required by llama-cpp-python pre-built wheels. See #1628
musl-dev && \
ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1 && \
# Clean up
apt clean && rm -rf /var/lib/apt/lists/*
# Build Server
FROM base AS server-deps
WORKDIR /app
# Install Application
COPY pyproject.toml .
COPY README.md .
ARG VERSION=0.0.0
# use the pre-built llama-cpp-python, torch cpu wheel
ENV PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu https://abetlen.github.io/llama-cpp-python/whl/cpu"
# avoid downloading unused cuda specific python packages
ENV CUDA_VISIBLE_DEVICES=""
RUN sed -i "s/dynamic = \\[\"version\"\\]/version = \"$VERSION\"/" pyproject.toml && \
TMPDIR=/home/cache/ pip install --cache-dir=/home/cache/ -e .[prod]
pip install --no-cache-dir -e .[prod]
# Copy Source Code
COPY . .
# Set the PYTHONPATH environment variable in order for it to find the Django app.
ENV PYTHONPATH=/app/src:$PYTHONPATH
# Go to the directory src/interface/web and export the built Next.js assets
# Build Web App
FROM node:20-alpine AS web-app
# Set build optimization env vars
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
WORKDIR /app/src/interface/web
RUN bash -c "yarn cache clean && yarn install --verbose && yarn ciexport"
# Install dependencies first (cache layer)
COPY src/interface/web/package.json src/interface/web/yarn.lock ./
RUN yarn install --frozen-lockfile
# Copy source and build
COPY src/interface/web/. ./
RUN yarn build
# Merge the Server and Web App into a Single Image
FROM base
ENV PYTHONPATH=/app/src:$PYTHONPATH
WORKDIR /app
COPY --from=server-deps /usr/local/lib/python3.10/dist-packages /usr/local/lib/python3.10/dist-packages
COPY --from=server-deps /usr/local/bin /usr/local/bin
COPY --from=web-app /app/src/interface/web/out ./src/khoj/interface/built
COPY . .
RUN cd src && python3 khoj/manage.py collectstatic --noinput
# Run the Application
# There are more arguments required for the application to run,
# but these should be passed in through the docker-compose.yml file.
# but those should be passed in through the docker-compose.yml file.
ARG PORT
EXPOSE ${PORT}
ENTRYPOINT [ "gunicorn", "-c", "gunicorn-config.py", "src.khoj.main:app" ]
ENTRYPOINT ["gunicorn", "-c", "gunicorn-config.py", "src.khoj.main:app"]
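The `sed` step in the server-deps stage above pins the package version before install, replacing hatch's dynamic version declaration in `pyproject.toml`. A minimal Python sketch of that substitution (the `pin_version` helper name is illustrative, not part of the repo):

```python
import re

def pin_version(pyproject_text, version):
    """Replace hatch's `dynamic = ["version"]` declaration with a pinned
    version line, mirroring the sed command in the Dockerfile build stage."""
    return re.sub(r'dynamic = \["version"\]', f'version = "{version}"', pyproject_text)

pinned = pin_version('name = "khoj"\ndynamic = ["version"]\n', "1.38.0")
```

This lets the Docker build stamp the image with the `VERSION` build argument instead of relying on git metadata, which is unavailable inside the build context.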


@@ -41,7 +41,7 @@ dependencies = [
"defusedxml == 0.7.1",
"fastapi >= 0.110.0",
"python-multipart >= 0.0.7",
"jinja2 == 3.1.4",
"jinja2 == 3.1.6",
"openai >= 1.0.0",
"tiktoken >= 0.3.2",
"tenacity >= 8.2.2",
@@ -51,43 +51,49 @@ dependencies = [
"pyyaml ~= 6.0",
"rich >= 13.3.1",
"schedule == 1.1.0",
"sentence-transformers == 3.0.1",
"sentence-transformers == 3.4.1",
"einops == 0.8.0",
"transformers >= 4.28.0",
"torch == 2.2.2",
"transformers >= 4.28.0, < 4.50.0",
"torch == 2.6.0",
"uvicorn == 0.30.6",
"aiohttp ~= 3.9.0",
"langchain == 0.2.5",
"langchain-openai == 0.1.7",
"langchain-community == 0.2.5",
"requests >= 2.26.0",
"tenacity == 8.3.0",
"anyio == 3.7.1",
"pymupdf >= 1.23.5",
"django == 5.0.8",
"anyio ~= 4.8.0",
"pymupdf == 1.24.11",
"django == 5.0.13",
"django-unfold == 0.42.0",
"authlib == 1.2.1",
"llama-cpp-python == 0.2.88",
"itsdangerous == 2.1.2",
"httpx == 0.25.0",
"httpx == 0.28.1",
"pgvector == 0.2.4",
"psycopg2-binary == 2.9.9",
"lxml == 4.9.3",
"tzdata == 2023.3",
"rapidocr-onnxruntime == 1.3.22",
"rapidocr-onnxruntime == 1.3.24",
"openai-whisper >= 20231117",
"django-phonenumber-field == 7.3.0",
"phonenumbers == 8.13.27",
"markdownify ~= 0.11.6",
"markdown-it-py ~= 3.0.0",
"websockets == 12.0",
"websockets == 13.0",
"psutil >= 5.8.0",
"huggingface-hub >= 0.22.2",
"apscheduler ~= 3.10.0",
"pytz ~= 2024.1",
"cron-descriptor == 1.4.3",
"django_apscheduler == 0.6.2",
"anthropic == 0.26.1",
"docx2txt == 0.8"
"anthropic == 0.49.0",
"docx2txt == 0.8",
"google-genai == 1.5.0",
"google-auth ~= 2.23.3",
"pyjson5 == 1.6.7",
"resend == 1.0.1",
"email-validator == 2.2.0",
"e2b-code-interpreter ~= 1.0.0",
]
dynamic = ["version"]
@@ -102,14 +108,15 @@ khoj = "khoj.main:run"
[project.optional-dependencies]
prod = [
"gunicorn == 22.0.0",
"google-auth == 2.23.3",
"stripe == 7.3.0",
"twilio == 8.11",
"boto3 >= 1.34.57",
"resend == 1.0.1",
]
local = [
"pgserver == 0.1.4",
]
dev = [
"khoj[prod]",
"khoj[prod,local]",
"pytest >= 7.1.2",
"pytest-xdist[psutil]",
"pytest-django == 4.5.2",
@@ -119,6 +126,9 @@ dev = [
"mypy >= 1.0.1",
"black >= 23.1.0",
"pre-commit >= 3.0.4",
"gitpython ~= 3.1.43",
"datasets",
"pandas",
]
[tool.hatch.version]


@@ -0,0 +1,211 @@
/*
* Copyright 2019 Google Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import groovy.xml.MarkupBuilder
plugins {
id 'com.android.application'
}
def twaManifest = [
applicationId: 'dev.khoj.app',
hostName: 'app.khoj.dev', // The domain being opened in the TWA.
launchUrl: '/', // The start path for the TWA. Must be relative to the domain.
name: 'Khoj AI', // The application name.
launcherName: 'Khoj', // The name shown on the Android Launcher.
themeColor: '#FFFFFF', // The color used for the status bar.
themeColorDark: '#000000', // The color used for the dark status bar.
navigationColor: '#000000', // The color used for the navigation bar.
navigationColorDark: '#000000', // The color used for the dark navbar.
navigationDividerColor: '#000000', // The navbar divider color.
navigationDividerColorDark: '#000000', // The dark navbar divider color.
backgroundColor: '#FFFFFF', // The color used for the splash screen background.
enableNotifications: true, // Set to true to enable notification delegation.
// Every shortcut must include the following fields:
// - name: String that will show up in the shortcut.
// - short_name: Shorter string used if |name| is too long.
// - url: Absolute path of the URL to launch the app with (e.g '/create').
// - icon: Name of the resource in the drawable folder to use as an icon.
shortcuts: [],
// The duration of fade out animation in milliseconds to be played when removing splash screen.
splashScreenFadeOutDuration: 300,
generatorApp: 'bubblewrap-cli', // Application that generated the Android Project
// The fallback strategy for when Trusted Web Activity is not available. Possible values are
// 'customtabs' and 'webview'.
fallbackType: 'customtabs',
enableSiteSettingsShortcut: 'true',
orientation: 'natural',
]
android {
compileSdkVersion 35
namespace "dev.khoj.app"
defaultConfig {
applicationId "dev.khoj.app"
minSdkVersion 19
targetSdkVersion 35
versionCode 4
versionName "4"
// The name for the application
resValue "string", "appName", twaManifest.name
// The name for the application on the Android Launcher
resValue "string", "launcherName", twaManifest.launcherName
// The URL that will be used when launching the TWA from the Android Launcher
def launchUrl = "https://" + twaManifest.hostName + twaManifest.launchUrl
resValue "string", "launchUrl", launchUrl
// The URL the Web Manifest for the Progressive Web App that the TWA points to. This
// is used by Chrome OS and Meta Quest to open the Web version of the PWA instead of
// the TWA, as it will probably give a better user experience for non-mobile devices.
resValue "string", "webManifestUrl", 'https://app.khoj.dev/static/khoj.webmanifest'
// This is used by Meta Quest.
resValue "string", "fullScopeUrl", 'https://app.khoj.dev/'
// The hostname is used when building the intent-filter, so the TWA is able to
// handle Intents to open host url of the application.
resValue "string", "hostName", twaManifest.hostName
// This attribute sets the status bar color for the TWA. It can be either set here or in
// `res/values/colors.xml`. Setting in both places is an error and the app will not
// compile. If not set, the status bar color defaults to #FFFFFF - white.
resValue "color", "colorPrimary", twaManifest.themeColor
// This attribute sets the dark status bar color for the TWA. It can be either set here or in
// `res/values/colors.xml`. Setting in both places is an error and the app will not
// compile. If not set, the status bar color defaults to #000000 - black.
resValue "color", "colorPrimaryDark", twaManifest.themeColorDark
// This attribute sets the navigation bar color for the TWA. It can be either set here or
// in `res/values/colors.xml`. Setting in both places is an error and the app will not
// compile. If not set, the navigation bar color defaults to #FFFFFF - white.
resValue "color", "navigationColor", twaManifest.navigationColor
// This attribute sets the dark navigation bar color for the TWA. It can be either set here
// or in `res/values/colors.xml`. Setting in both places is an error and the app will not
// compile. If not set, the navigation bar color defaults to #000000 - black.
resValue "color", "navigationColorDark", twaManifest.navigationColorDark
// This attribute sets the navbar divider color for the TWA. It can be either
// set here or in `res/values/colors.xml`. Setting in both places is an error and the app
// will not compile. If not set, the divider color defaults to #00000000 - transparent.
resValue "color", "navigationDividerColor", twaManifest.navigationDividerColor
// This attribute sets the dark navbar divider color for the TWA. It can be either
// set here or in `res/values/colors.xml`. Setting in both places is an error and the
// app will not compile. If not set, the divider color defaults to #000000 - black.
resValue "color", "navigationDividerColorDark", twaManifest.navigationDividerColorDark
// Sets the color for the background used for the splash screen when launching the
// Trusted Web Activity.
resValue "color", "backgroundColor", twaManifest.backgroundColor
// Defines a provider authority for the Splash Screen
resValue "string", "providerAuthority", twaManifest.applicationId + '.fileprovider'
// The enableNotification resource is used to enable or disable the
// TrustedWebActivityService, by changing the android:enabled and android:exported
// attributes
resValue "bool", "enableNotification", twaManifest.enableNotifications.toString()
twaManifest.shortcuts.eachWithIndex { shortcut, index ->
resValue "string", "shortcut_name_$index", "$shortcut.name"
resValue "string", "shortcut_short_name_$index", "$shortcut.short_name"
}
// The splashScreenFadeOutDuration resource is used to set the duration of fade out animation in milliseconds
// to be played when removing splash screen. The default is 0 (no animation).
resValue "integer", "splashScreenFadeOutDuration", twaManifest.splashScreenFadeOutDuration.toString()
resValue "string", "generatorApp", twaManifest.generatorApp
resValue "string", "fallbackType", twaManifest.fallbackType
resValue "bool", "enableSiteSettingsShortcut", twaManifest.enableSiteSettingsShortcut
resValue "string", "orientation", twaManifest.orientation
}
buildTypes {
release {
minifyEnabled true
}
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
lintOptions {
checkReleaseBuilds false
}
}
task generateShorcutsFile {
assert twaManifest.shortcuts.size() < 5, "You can have at most 4 shortcuts."
twaManifest.shortcuts.eachWithIndex { s, i ->
assert s.name != null, 'Missing `name` in shortcut #' + i
assert s.short_name != null, 'Missing `short_name` in shortcut #' + i
assert s.url != null, 'Missing `url` in shortcut #' + i
assert s.icon != null, 'Missing `icon` in shortcut #' + i
}
def shortcutsFile = new File("$projectDir/src/main/res/xml", "shortcuts.xml")
def xmlWriter = new StringWriter()
def xmlMarkup = new MarkupBuilder(new IndentPrinter(xmlWriter, " ", true))
xmlMarkup
.'shortcuts'('xmlns:android': 'http://schemas.android.com/apk/res/android') {
twaManifest.shortcuts.eachWithIndex { s, i ->
'shortcut'(
'android:shortcutId': 'shortcut' + i,
'android:enabled': 'true',
'android:icon': '@drawable/' + s.icon,
'android:shortcutShortLabel': '@string/shortcut_short_name_' + i,
'android:shortcutLongLabel': '@string/shortcut_name_' + i) {
'intent'(
'android:action': 'android.intent.action.MAIN',
'android:targetPackage': twaManifest.applicationId,
'android:targetClass': twaManifest.applicationId + '.LauncherActivity',
'android:data': s.url)
'categories'('android:name': 'android.intent.category.LAUNCHER')
}
}
}
shortcutsFile.text = xmlWriter.toString() + '\n'
}
preBuild.dependsOn(generateShorcutsFile)
repositories {
}
dependencies {
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.google.androidbrowserhelper:locationdelegation:1.1.1'
implementation 'com.google.androidbrowserhelper:androidbrowserhelper:2.5.0'
}


@@ -0,0 +1,188 @@
<!--
Copyright 2019 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- The "package" attribute is rewritten by the Gradle build with the value of applicationId.
It is still required here, as it is used to derive paths, for instance when referring
to an Activity by ".MyActivity" instead of the full name. If more Activities are added to the
application, the package attribute will need to reflect the correct path in order to use
the abbreviated format. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="dev.khoj.app">
<uses-permission android:name="android.permission.POST_NOTIFICATIONS"/>
<application
android:name="Application"
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/appName"
android:manageSpaceActivity="com.google.androidbrowserhelper.trusted.ManageDataLauncherActivity"
android:supportsRtl="true"
android:theme="@android:style/Theme.Translucent.NoTitleBar">
<meta-data
android:name="asset_statements"
android:resource="@string/assetStatements" />
<meta-data
android:name="web_manifest_url"
android:value="@string/webManifestUrl" />
<meta-data
android:name="twa_generator"
android:value="@string/generatorApp" />
<activity android:name="com.google.androidbrowserhelper.trusted.ManageDataLauncherActivity">
<meta-data
android:name="android.support.customtabs.trusted.MANAGE_SPACE_URL"
android:value="@string/launchUrl" />
</activity>
<activity android:name="LauncherActivity"
android:alwaysRetainTaskState="true"
android:label="@string/launcherName"
android:exported="true">
<meta-data android:name="android.support.customtabs.trusted.DEFAULT_URL"
android:value="@string/launchUrl" />
<meta-data
android:name="android.support.customtabs.trusted.STATUS_BAR_COLOR"
android:resource="@color/colorPrimary" />
<meta-data
android:name="android.support.customtabs.trusted.STATUS_BAR_COLOR_DARK"
android:resource="@color/colorPrimaryDark" />
<meta-data
android:name="android.support.customtabs.trusted.NAVIGATION_BAR_COLOR"
android:resource="@color/navigationColor" />
<meta-data
android:name="android.support.customtabs.trusted.NAVIGATION_BAR_COLOR_DARK"
android:resource="@color/navigationColorDark" />
<meta-data
android:name="androix.browser.trusted.NAVIGATION_BAR_DIVIDER_COLOR"
android:resource="@color/navigationDividerColor" />
<meta-data
android:name="androix.browser.trusted.NAVIGATION_BAR_DIVIDER_COLOR_DARK"
android:resource="@color/navigationDividerColorDark" />
<meta-data android:name="android.support.customtabs.trusted.SPLASH_IMAGE_DRAWABLE"
android:resource="@drawable/splash"/>
<meta-data android:name="android.support.customtabs.trusted.SPLASH_SCREEN_BACKGROUND_COLOR"
android:resource="@color/backgroundColor"/>
<meta-data android:name="android.support.customtabs.trusted.SPLASH_SCREEN_FADE_OUT_DURATION"
android:value="@integer/splashScreenFadeOutDuration"/>
<meta-data android:name="android.support.customtabs.trusted.FILE_PROVIDER_AUTHORITY"
android:value="@string/providerAuthority"/>
<meta-data android:name="android.app.shortcuts" android:resource="@xml/shortcuts" />
<meta-data android:name="android.support.customtabs.trusted.FALLBACK_STRATEGY"
android:value="@string/fallbackType" />
<meta-data android:name="android.support.customtabs.trusted.SCREEN_ORIENTATION"
android:value="@string/orientation"/>
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<intent-filter android:autoVerify="true">
<action android:name="android.intent.action.VIEW"/>
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.BROWSABLE"/>
<data android:scheme="https"
android:host="@string/hostName"/>
</intent-filter>
</activity>
<activity android:name="com.google.androidbrowserhelper.trusted.FocusActivity" />
<activity android:name="com.google.androidbrowserhelper.trusted.WebViewFallbackActivity"
android:configChanges="orientation|screenSize" />
<provider
android:name="androidx.core.content.FileProvider"
android:authorities="@string/providerAuthority"
android:grantUriPermissions="true"
android:exported="false">
<meta-data
android:name="android.support.FILE_PROVIDER_PATHS"
android:resource="@xml/filepaths" />
</provider>
<service
android:name=".DelegationService"
android:enabled="@bool/enableNotification"
android:exported="@bool/enableNotification">
<meta-data
android:name="android.support.customtabs.trusted.SMALL_ICON"
android:resource="@drawable/ic_notification_icon" />
<intent-filter>
<action android:name="android.support.customtabs.trusted.TRUSTED_WEB_ACTIVITY_SERVICE"/>
<category android:name="android.intent.category.DEFAULT"/>
</intent-filter>
</service>
<activity android:name="com.google.androidbrowserhelper.trusted.NotificationPermissionRequestActivity" />
<activity android:name=
"com.google.androidbrowserhelper.locationdelegation.PermissionRequestActivity"/>
</application>
</manifest>


@@ -0,0 +1,29 @@
/*
* Copyright 2020 Google Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package dev.khoj.app;
public class Application extends android.app.Application {
@Override
public void onCreate() {
super.onCreate();
}
}


@@ -0,0 +1,17 @@
package dev.khoj.app;
import com.google.androidbrowserhelper.locationdelegation.LocationDelegationExtraCommandHandler;
public class DelegationService extends
com.google.androidbrowserhelper.trusted.DelegationService {
@Override
public void onCreate() {
super.onCreate();
registerExtraCommandHandler(new LocationDelegationExtraCommandHandler());
}
}


@@ -0,0 +1,54 @@
/*
* Copyright 2020 Google Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package dev.khoj.app;
import android.content.pm.ActivityInfo;
import android.net.Uri;
import android.os.Build;
import android.os.Bundle;
public class LauncherActivity
extends com.google.androidbrowserhelper.trusted.LauncherActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
// Setting an orientation crashes the app due to the transparent background on Android 8.0
// Oreo and below. We only set the orientation on Oreo and above. This only affects the
// splash screen and Chrome will still respect the orientation.
// See https://github.com/GoogleChromeLabs/bubblewrap/issues/496 for details.
if (Build.VERSION.SDK_INT > Build.VERSION_CODES.O) {
setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_UNSPECIFIED);
}
}
@Override
protected Uri getLaunchingUrl() {
// Get the original launch Url.
Uri uri = super.getLaunchingUrl();
return uri;
}
}


@@ -0,0 +1,25 @@
<!--
Copyright 2020 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<inset xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:aapt="http://schemas.android.com/aapt"
android:inset="2dp">
<aapt:attr name="android:drawable">
<shape android:shape="oval">
<solid android:color="@color/shortcut_background" />
<size android:width="44dp" android:height="44dp" />
</shape>
</aapt:attr>
</inset>

Binary image files added (previews not shown).


@@ -0,0 +1 @@
{"name":"Khoj","short_name":"Khoj","display":"standalone","start_url":"/","description":"The open, personal AI for your digital brain. You can ask Khoj to draft a message, paint your imagination, find information on the internet and even answer questions from your documents.","theme_color":"#ffffff","background_color":"#ffffff","icons":[{"src":"/static/assets/icons/khoj_lantern_128x128.png","sizes":"128x128","type":"image/png"},{"src":"/static/assets/icons/khoj_lantern_256x256.png","sizes":"256x256","type":"image/png"}],"screenshots":[{"src":"/static/assets/samples/phone-remember-plan-sample.png","sizes":"419x900","type":"image/png","form_factor":"narrow","label":"Remember and Plan"},{"src":"/static/assets/samples/phone-browse-draw-sample.png","sizes":"419x900","type":"image/png","form_factor":"narrow","label":"Browse and Draw"},{"src":"/static/assets/samples/desktop-remember-plan-sample.png","sizes":"1260x742","type":"image/png","form_factor":"wide","label":"Remember and Plan"},{"src":"/static/assets/samples/desktop-browse-draw-sample.png","sizes":"1260x742","type":"image/png","form_factor":"wide","label":"Browse and Draw"}]}
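The web manifest above supplies the PWA install metadata that the TWA's `webManifestUrl` points at. A small sketch of the installability check a browser performs on such a manifest (an assumed heuristic, not Chrome's exact algorithm; field names follow the Web App Manifest spec):

```python
import json

REQUIRED_FIELDS = {"name", "start_url", "display", "icons"}

def installable(manifest_text):
    """Rough check that a web app manifest has the basic fields an installable
    PWA needs, including at least one PNG icon of 128px or larger."""
    m = json.loads(manifest_text)
    if not REQUIRED_FIELDS <= m.keys():
        return False
    return any(
        icon.get("type") == "image/png"
        and int(icon.get("sizes", "0x0").split("x")[0]) >= 128
        for icon in m["icons"]
    )

manifest = json.dumps({
    "name": "Khoj", "short_name": "Khoj", "display": "standalone",
    "start_url": "/",
    "icons": [{"src": "/static/assets/icons/khoj_lantern_128x128.png",
               "sizes": "128x128", "type": "image/png"}],
})
```

The `screenshots` entries with `form_factor` hints are optional; they enrich the install prompt on desktop and mobile but do not affect installability.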


@@ -0,0 +1,18 @@
<!--
Copyright 2020 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<resources>
<color name="shortcut_background">#F5F5F5</color>
</resources>

Some files were not shown because too many files have changed in this diff.