1 file changed: +23 −5 lines changed

@@ -151,6 +151,20 @@ print(interpreter.system_message)
 
 ### Change the Model
 
+For `gpt-3.5-turbo`, use fast mode:
+
+```shell
+interpreter --fast
+```
+
+In Python, you will need to set the model manually:
+
+```python
+interpreter.model = "gpt-3.5-turbo"
+```
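A minimal usage sketch of the above, assuming the `import interpreter` / `interpreter.chat()` API shown earlier in this README (the example prompt is illustrative, not from the diff):

```python
import interpreter

# Switch to the faster, cheaper model before starting a session.
interpreter.model = "gpt-3.5-turbo"

# Start a chat; passing a string runs a single task instead of an interactive session.
interpreter.chat("List the five largest files in the current directory")
```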
+
+### Running Open Interpreter locally
+
 ⓘ **Issues running locally?** Read our new [GPU setup guide](/docs/GPU.md) and [Windows setup guide](/docs/WINDOWS.md).
 
 You can run `interpreter` in local mode from the command line to use `Code Llama`:
@@ -159,16 +173,20 @@ You can run `interpreter` in local mode from the command line to use `Code Llama`:
 interpreter --local
 ```
 
-For `gpt-3.5-turbo`, use fast mode:
+Or run any Hugging Face model **locally** by using its repo ID (e.g. "tiiuae/falcon-180B"):
 
 ```shell
-interpreter --fast
+interpreter --model tiiuae/falcon-180B
 ```
 
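A hedged Python sketch of the same local run — this assumes the `--local` switch maps to an `interpreter.local` attribute, mirroring how `--model` maps to `interpreter.model` above; the attribute name is an assumption, not confirmed by this diff:

```python
import interpreter

# Assumed mapping of the CLI flags shown above.
interpreter.local = True                  # run the model on your own machine
interpreter.model = "tiiuae/falcon-180B"  # Hugging Face repo ID

interpreter.chat()
```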
-In Python, you will need to set the model manually:
+#### Local model params
 
-```python
-interpreter.model = "gpt-3.5-turbo"
+You can easily modify the `max_tokens` and `context_window` (in tokens) of locally running models.
+
+Smaller context windows will use less RAM, so we recommend trying a shorter window if your GPU is failing.
+
+```shell
+interpreter --max_tokens 2000 --context_window 16000
 ```
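For completeness, a sketch of the same parameters from Python — assuming the `--max_tokens` and `--context_window` flags map to same-named attributes on the `interpreter` object (only the CLI flags appear in this diff):

```python
import interpreter

# Assumed attribute names mirroring the CLI flags above.
interpreter.context_window = 16000  # prompt window in tokens; smaller windows use less RAM
interpreter.max_tokens = 2000       # cap on tokens generated per reply

interpreter.chat()
```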
 
 ### Azure Support