ollamaui - Open WebUI (formerly Ollama WebUI)


https://hub.docker.com/r/ollamawebui/ollama-webui

https://github.com/ollama-webui/ollama-webui

https://openwebui.com/


-0.3.21

docker run -d -p 98:8080 --name ollamaui --hostname ollamaui \
  --gpus=all --restart always \
  --network dify_default --ip 172.19.0.35 --link ollama \
  -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro \
  --privileged=true --user=root -e TZ='Asia/Shanghai' \
  --ulimit nofile=262144:262144 \
  -v /data/file:/data/file/ \
  -v /data/site/docker/data/ollamaui:/app/backend/data \
  -e OLLAMA_BASE_URLS='http://ollama:11434' \
  -e ENABLE_OLLAMA_API=True \
  -e COMFYUI_BASE_URL='http://192.168.1.6:8188/' \
  -e ENABLE_IMAGE_GENERATION=True \
  -e WEBUI_NAME='HtmlToo' \
  -e ENABLE_SIGNUP=True \
  -e HF_ENDPOINT='https://hf-mirror.com' \
  -e local_files_only=True \
  -e HF_HUB_OFFLINE=0 \
  ghcr.io/open-webui/open-webui:cuda


docker run -d -p 98:8080 --name ollamaui --hostname ollamaui \
  --restart always \
  --network mgr --ip 172.18.0.35 --link ollama \
  -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro \
  --privileged=true --user=root -e TZ='Asia/Shanghai' \
  --ulimit nofile=262144:262144 \
  -v /data/file:/data/file/ \
  -v /data/site/docker/data/ollamaui:/app/backend/data \
  -e OLLAMA_BASE_URLS='http://ollama:11434' \
  -e ENABLE_OLLAMA_API=True \
  -e COMFYUI_BASE_URL='http://192.168.1.6:8188/' \
  -e ENABLE_IMAGE_GENERATION=True \
  -e WEBUI_NAME='HtmlToo' \
  -e ENABLE_SIGNUP=False \
  ghcr.io/open-webui/open-webui:main


docker exec -it ollamaui  /bin/bash


http://192.168.1.6:98


docker pull  ghcr.io/open-webui/open-webui:cuda

docker save  ghcr.io/open-webui/open-webui:cuda | gzip > /data/site/htmltoo.f/htmltoo.up/soft/docker.tar/open-webui-cuda-0.3.21.tar.gz

docker load < /data/docker.tar/open-webui-cuda-0.3.21.tar.gz


- Up-to-Date

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock  containrrr/watchtower --run-once ollamaui


-backend/config.py

WEBUI_AUTH

Default: True -> set to False  # disables the account/login system
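Rather than patching `backend/config.py`, the same switch can be supplied at container start; `WEBUI_AUTH` is a documented Open WebUI environment variable. A minimal sketch (note that Open WebUI refuses to disable auth if user accounts already exist in the data volume):

```shell
# Disable the login system via environment variable instead of editing config.py.
# Must be set before the first launch; with existing accounts the container exits.
docker run -d -p 98:8080 --name ollamaui \
  -e WEBUI_AUTH=False \
  -v /data/site/docker/data/ollamaui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```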


-e OLLAMA_BASE_URL='http://ollama:11434'

-e OLLAMA_API_BASE_URL='http://ollama:11434/api'

-e WEBUI_SECRET_KEY='TkjGEiQ23@5K^j'

-Multiple Ollama backends

OLLAMA_BASE_URLS="http://ollama-one:11434;http://ollama-two:11434"
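Before pointing Open WebUI at several backends, each one can be probed individually (a sketch; `/api/version` is Ollama's version endpoint, and the two hostnames are the placeholders from the line above):

```shell
# Check that every Ollama backend in OLLAMA_BASE_URLS is reachable.
urls="http://ollama-one:11434;http://ollama-two:11434"
IFS=';'
for u in $urls; do
  if curl -fsS --max-time 3 "$u/api/version" >/dev/null; then
    echo "OK   $u"
  else
    echo "DOWN $u"
  fi
done
unset IFS
```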


# Image generation

--stable-diffusion-webui

-e AUTOMATIC1111_BASE_URL='http://192.168.1.6:7860/' -e ENABLE_IMAGE_GENERATION=True

Admin Panel -> Settings -> Images ->  Image Generation Engine field  -> Default (Automatic1111)

API URL:  http://192.168.1.6:7860/

--ComfyUI

-e COMFYUI_BASE_URL='http://192.168.1.6:8188/' -e ENABLE_IMAGE_GENERATION=True

Admin Panel -> Settings -> Images -> Image Generation Engine field  ->  ComfyUI

API URL:  http://192.168.1.6:8188/


# Text-to-speech: openedai-speech integration

-GPU

docker run -d --name speech --hostname speech \
  --gpus=all -p 8000:8000 --restart always \
  --network dify_default --ip 172.19.0.36 \
  -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro \
  --privileged=true --user=root -e TZ='Asia/Shanghai' \
  --ulimit nofile=262144:262144 \
  -v /data/file:/data/file/ \
  -v /data/site/docker/data/openedai-speech/voices:/app/voices \
  -v /data/site/docker/data/openedai-speech/config:/app/config \
  ghcr.io/matatonic/openedai-speech:latest

-CPU

docker run -d --name speech --hostname speech \
  -p 8000:8000 --restart always \
  --network dify_default --ip 172.19.0.36 \
  -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro \
  --privileged=true --user=root -e TZ='Asia/Shanghai' \
  --ulimit nofile=262144:262144 \
  -v /data/file:/data/file/ \
  -v /data/site/docker/data/openedai-speech/voices:/app/voices \
  -v /data/site/docker/data/openedai-speech/config:/app/config \
  ghcr.io/matatonic/openedai-speech-min:latest

-Configuration

Admin Panel > Settings > Audio -> http://192.168.1.6:8000/v1
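openedai-speech mimics the OpenAI `/v1/audio/speech` route, so the wiring can be checked with curl before pointing Open WebUI at it. A sketch, assuming the service is up at the address above (model and voice names follow the OpenAI TTS API that openedai-speech imitates):

```shell
# Request a short clip; a healthy service returns MPEG audio.
curl -s http://192.168.1.6:8000/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "tts-1", "voice": "alloy", "input": "test"}' \
  -o /tmp/tts-test.mp3

file /tmp/tts-test.mp3   # should report "MPEG ADTS" or similar audio type
```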


# browserless

docker run -d --name browsing --hostname browsing \
  --restart always \
  --network dify_default --ip 172.19.0.37 \
  -v /etc/localtime:/etc/localtime:ro \
  --privileged=true --user=root -e TZ='Asia/Shanghai' \
  --ulimit nofile=262144:262144 \
  -v /data/file:/data/file/ \
  -e MAX_CONCURRENT_SESSIONS='10' \
  hub.htmltoo.com:5000/bigdata:browserless

docker run -d --name browsing --hostname browsing \
  --restart always \
  --network dify_default --ip 172.19.0.37 \
  -v /etc/localtime:/etc/localtime:ro \
  --privileged=true --user=root -e TZ='Asia/Shanghai' \
  --ulimit nofile=262144:262144 \
  -v /data/file:/data/file/ \
  -e MAX_CONCURRENT_SESSIONS='10' \
  browserless/chrome:latest


# Compress (save / load images)

docker pull  ghcr.io/open-webui/open-webui:cuda
docker pull  ghcr.io/matatonic/openedai-speech:latest
docker save  ghcr.io/open-webui/open-webui:cuda | gzip > /data/site/htmltoo.f/htmltoo.up/soft/docker.tar/ollama-webui-cuda.tar.gz
docker save  ghcr.io/matatonic/openedai-speech:latest | gzip > /data/site/htmltoo.f/htmltoo.up/soft/docker.tar/openedai-speech.tar.gz
docker rmi ghcr.io/open-webui/open-webui:cuda
docker rmi ghcr.io/matatonic/openedai-speech:latest
docker load < /data/site/docker.tar/ollama-webui-cuda.tar.gz


API Key

config.json

"provider": "openai"

"apiBase": "http://localhost:3000/ollama/v1"

"apiKey": "sk-79970662256d425eb274fc4563d4525b"

WebUI -> Settings -> Account -> API Keys
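A key generated there can be tested directly against Open WebUI's OpenAI-compatible endpoint. A sketch; the host/port, the key, and the model name `llama3` are placeholders for whatever is actually deployed:

```shell
# Chat completion through Open WebUI's Ollama-backed OpenAI-compatible route.
curl -s http://localhost:3000/ollama/v1/chat/completions \
  -H "Authorization: Bearer sk-79970662256d425eb274fc4563d4525b" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "hello"}]}'
```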


# LiteLLM configuration

https://docs.openwebui.com/tutorial/litellm

https://github.com/BerriAI/litellm

https://docs.litellm.ai/docs/proxy/configs

-v /data/site/docker/data/ollamaui/litellm/config.yaml:/app/backend/data/litellm/config.yaml

vim  /data/site/docker/data/ollamaui/litellm/config.yaml

model_list:
  - model_name: gpt-3.5-turbo ### RECEIVED MODEL NAME ###
    litellm_params: # all params accepted by litellm.completion() - https://docs.litellm.ai/docs/completion/input
      model: azure/gpt-turbo-small-eu ### MODEL NAME sent to `litellm.completion()` ###
      api_base: https://my-endpoint-europe-berri-992.openai.azure.com/
      api_key: "os.environ/AZURE_API_KEY_EU" # does os.getenv("AZURE_API_KEY_EU")
      rpm: 6      # [OPTIONAL] Rate limit for this deployment: in requests per minute (rpm)
  - model_name: bedrock-claude-v1 
    litellm_params:
      model: bedrock/anthropic.claude-instant-v1
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-small-ca
      api_base: https://my-endpoint-canada-berri992.openai.azure.com/
      api_key: "os.environ/AZURE_API_KEY_CA"
      rpm: 6
  - model_name: anthropic-claude
    litellm_params: 
      model: bedrock/anthropic.claude-instant-v1
      ### [OPTIONAL] SET AWS REGION ###
      aws_region_name: us-east-1
  - model_name: vllm-models
    litellm_params:
      model: openai/facebook/opt-125m # the `openai/` prefix tells litellm it's openai compatible
      api_base: http://0.0.0.0:4000/v1
      api_key: none
      rpm: 1440
    model_info: 
      version: 2
  
  # Use this if you want to make requests to `claude-3-haiku-20240307`,`claude-3-opus-20240229`,`claude-2.1` without defining them on the config.yaml
  # Default models
  # Works for ALL Providers and needs the default provider credentials in .env
  - model_name: "*" 
    litellm_params:
      model: "*"
litellm_settings: # module level litellm settings - https://github.com/BerriAI/litellm/blob/main/litellm/__init__.py
  drop_params: True
  success_callback: ["langfuse"] # OPTIONAL - if you want to start sending LLM Logs to Langfuse. Make sure to set `LANGFUSE_PUBLIC_KEY` and `LANGFUSE_SECRET_KEY` in your env
general_settings: 
  master_key: sk-1234 # [OPTIONAL] Only use this if you to require all calls to contain this key (Authorization: Bearer sk-1234)
  alerting: ["slack"] # [OPTIONAL] If you want Slack Alerts for Hanging LLM requests, Slow llm responses, Budget Alerts. Make sure to set `SLACK_WEBHOOK_URL` in your env
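Once the proxy is running, every request carries the `master_key` as a bearer token, and the wildcard `model_name: "*"` entry lets any provider model be requested by name. A sketch against LiteLLM's default proxy port 4000:

```shell
# Call the LiteLLM proxy with the master key from general_settings.
curl -s http://localhost:4000/chat/completions \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "ping"}]}'
```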



grep -rn 'ollama' /app/

grep -rn 'LLMs' /app/

grep -rn 'Open' /app/

sed -i "s/Ollama/HtmlToo/g"  `grep -rl "Ollama" /app`;

sed -i "s/Open WebUI/HtmlToo WebUI/g"  `grep -rl "Open WebUI" /app`;

sed -i "s/LLMs/HtmlToo/g"  `grep -rl "LLMs" /app/`;

sed -i "s/WebUI/HtmlToo UI/g"  `grep -rl "WebUI" /app`;

sed -i "s/discord/htmltoo/g"  `grep -rl "discord" /app`;

sed -i "s/github.com/htmltoo.com/g"  `grep -rl "github.com" /app`;

sed -i "s/ollama-webui/htmltoo-webui/g"  `grep -rl "ollama-webui" /app`;

sed -i "s/Ollama API URL/htmltoo API URL/g"  `grep -rl "Ollama API URL" /app`;

sed -i "s/gg\/5rJgQTnV4s/com/g"  `grep -rl "gg/5rJgQTnV4s" /app`;

sed -i "s/https:\/\/htmltoo.com/http:\/\/htmltoo.com/g"  `grep -rl "https://htmltoo.com" /app`;

sed -i "s/htmltoo-webui\/htmltoo-webui//g"  `grep -rl "htmltoo-webui/htmltoo-webui" /app`;

sed -i "s/Ollama.com/htmltoo.com/g"  `grep -rl "Ollama.com" /app`;

sed -i "s/Timothy J. Baek/QQ:522588122/g"  `grep -rl "Timothy J. Baek" /app`;

sed -i "s/\/tjbck//g"  `grep -rl "/tjbck" /app`;

sed -i "s/Ollama Version/HtmlToo Version/g"  `grep -rl "Ollama Version" /app`;
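The backtick-based `sed` pattern above breaks on file names containing spaces and errors out when `grep` matches nothing. A null-delimited variant is safer; `rebrand` is a hypothetical helper name, shown here rehearsed against a scratch directory rather than `/app`:

```shell
# A null-delimited variant of the rebranding pattern: safe for file
# names with spaces, and skips sed entirely when nothing matches.
rebrand() {
  dir=$1; from=$2; to=$3
  grep -rlZ "$from" "$dir" | xargs -0 -r sed -i "s/${from}/${to}/g"
}

# Rehearse against a scratch copy before touching /app.
workdir=$(mktemp -d)
printf 'Welcome to Open WebUI\n' > "$workdir/index.txt"
rebrand "$workdir" "Open WebUI" "HtmlToo WebUI"
cat "$workdir/index.txt"
```

Note the search/replace strings must not contain `/`, since that is the `sed` delimiter used here.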


