# milla

Milla is an IRC bot that relays your questions to an AI model and prints the answer with syntax highlighting.<br/>
Currently supported providers:

- Ollama
- OpenAI
- Gemini

![milla](./milla.png)

### Config

An example config file:

```toml
ircServer = "irc.terminaldweller.com"
ircPort = 6697
ircNick = "mybot"
ircSaslUser = "mybot"
ircSaslPass = "mypass"
ircChannel = "#mychannel"
ollamaEndpoint = ""
temp = 0.2
ollamaSystem = ""
requestTimeout = 10
millaReconnectDelay = 60
enableSasl = true
model = "llama2-uncensored"
chromaStyle = "rose-pine-moon"
chromaFormatter = "terminal256"
provider = "ollama" # one of: ollama, chatgpt, gemini
apikey = "key"
topP = 0.9
topK = 20
```
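
The same options cover the hosted providers as well. Below is a minimal sketch for a Gemini setup, using only keys from the example above; the `apikey` value and the `gemini-pro` model name are illustrative placeholders, not values shipped with milla:

```toml
ircServer = "irc.terminaldweller.com"
ircPort = 6697
ircNick = "mybot"
ircChannel = "#mychannel"
enableSasl = false
provider = "gemini"        # switch from ollama to the Gemini provider
apikey = "your-api-key"    # API key for the chosen provider (placeholder)
model = "gemini-pro"       # illustrative model name
temp = 0.2
requestTimeout = 10
millaReconnectDelay = 60
chromaStyle = "rose-pine-moon"
chromaFormatter = "terminal256"
```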

### Deploy

You can use the provided compose file:<br/>

```yaml
version: "3.9"
services:
  milla:
    image: milla
    build:
      context: .
    deploy:
      resources:
        limits:
          memory: 64M
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
    networks:
      - millanet
    restart: unless-stopped
    command: ["--config", "/opt/milla/config.toml"]
    volumes:
      - ./config.toml:/opt/milla/config.toml
    cap_drop:
      - ALL
    dns:
      - 9.9.9.9
    environment:
      - SERVER_DEPLOYMENT_TYPE=deployment
    entrypoint: ["/milla/milla"]
networks:
  millanet:
```
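
With `config.toml` placed next to the compose file, build and start the bot with `docker compose up -d --build`. The file is mounted into the container at `/opt/milla/config.toml`, which is the path passed to the `--config` flag in the compose `command`.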