Get a Taste of LLMs from GPT4All

Large language models have become popular recently, and ChatGPT is the most fashionable example. Trying out ChatGPT to understand what LLMs are about is easy, but sometimes you may want an offline alternative that can run on your computer. In this post, you will learn about GPT4All as an LLM that you can install on your computer. In particular, you will learn:

  • What is GPT4All
  • How to install the desktop client for GPT4All
  • How to run GPT4All in Python

Get started and apply ChatGPT with my book Maximizing Productivity with ChatGPT. It provides real-world use cases and prompt examples designed to get you using ChatGPT quickly.


Let’s get started.

Get a Taste of LLMs from GPT4All
Picture generated by the author using Stable Diffusion. Some rights reserved.

 

Updates:

  • 2023-10-10: Refreshed the Python code for gpt4all module version 1.0.12

Overview

This post is divided into three parts; they are:

  • What is GPT4All?
  • How to get GPT4All
  • How to use GPT4All in Python

What is GPT4All?

The term “GPT” is derived from the title of a 2018 paper, “Improving Language Understanding by Generative Pre-Training” by Radford et al. That paper demonstrated that transformer models can be trained to understand human language.

Since then, many people have attempted to develop language models using the transformer architecture, and it has been found that a sufficiently large model can give excellent results. However, many of the models developed are proprietary. They are either provided as a service with a paid subscription or under a license with restrictive terms. Some are even impossible to run on commodity hardware due to their size.

The GPT4All project tries to make LLMs available to the public on common hardware. It allows you to train and deploy your own model. Pretrained models are also available, with sizes small enough to run reasonably on a CPU.

How to get GPT4All

Let’s focus only on using the pre-trained models.

At the time of writing, GPT4All is available from https://gpt4all.io/index.html, and you can run it as a desktop application or use it via a Python library. You can download the installer for your OS to run the desktop client, which is only a few hundred MB. You should see an installation screen as follows:

After you have the client installed, launching it for the first time will prompt you to install a model, which can be several GB in size. To start, you may pick “gpt4all-j-v1.3-groovy” (the GPT4All-J model). It is a relatively small but popular model.

Once the client and model are ready, you can type your message in the input box. The model may expect a specific form of input, e.g., a particular language or style. This model expects a conversation style (like ChatGPT) and generally handles English well. For example, below is how it responds to the input “Give me a list of 10 colors and their RGB code”:

How to use GPT4All in Python

The key component of GPT4All is the model. The desktop client is merely an interface to it. Besides the client, you can also invoke the model through a Python library.

The library is unsurprisingly named “gpt4all,” and you can install it with the pip command:
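Assuming a typical Python setup, the install is the usual pip invocation. The version pin below matches the release this post’s code was tested on and is optional:

```shell
# Install the gpt4all Python bindings; the ==1.0.12 pin matches the
# version the code in this post was tested against and may be dropped.
pip install gpt4all==1.0.12
```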

Note: This is a fast-moving library and the functions may change. The following code has been tested on version 1.0.12 but it may not work in future versions.

Afterward, you can use it in Python in just a few lines of code:
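As a sketch of what those few lines look like (written against gpt4all version 1.0.12; the model name here is the GPT4All-J model from the previous section, and any other model name the library knows can be substituted):

```python
from gpt4all import GPT4All

# Downloads the model file on first use (several GB), then loads it
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")

# Ask a single question; the reply comes back as a plain string
output = model.generate("Give me a list of 10 colors and their RGB code",
                        max_tokens=200)
print(output)
```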

Running the above code will download the model file if you haven’t downloaded it already. Afterward, the model is loaded, input is provided, and the response is returned as a string. The output printed may be:

The chat history of the session is stored in the model’s attribute current_chat_session as a Python list. An example is as follows:
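For illustration, here is a made-up history in that shape (the messages are hypothetical, not actual model output), together with a simple way to pull out the model’s replies:

```python
# A hypothetical chat history in the same shape as model.current_chat_session:
# a list of dictionaries with "role" and "content" keys
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Name 3 colors"},
    {"role": "assistant", "content": "Blue, Green and Red"},
]

# Collect only the assistant's turns
replies = [msg["content"] for msg in history if msg["role"] == "assistant"]
print(replies)  # → ['Blue, Green and Red']
```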

The history is a sequence of dialog messages in the format of Python dictionaries with keys role and content. The role can be "system", "assistant", or "user", while content is a string of text. If you’re chatting with your model like the example, your role is "user" while the computer’s response is "assistant". You can keep calling generate() to continue your conversation. Below is an example:
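A multi-turn conversation can be sketched as follows (assuming gpt4all 1.0.12, whose chat_session() context manager keeps the accumulated history for you; the questions are arbitrary illustrations):

```python
from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-j-v1.3-groovy")

# Within a chat session, every generate() call is appended to the history,
# so a later question can refer back to an earlier answer
with model.chat_session():
    for question in ["Give me a list of 10 colors and their RGB code",
                     "Which of these is the brightest?"]:
        answer = model.generate(question, max_tokens=256)
        print("Q:", question)
        print("A:", answer)
    # The accumulated dialog: a list of role/content dictionaries
    print(model.current_chat_session)
```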

Note that you invoked the model multiple times in the for-loop. Each time the model responds, it appends the output to the list of chat messages, so the context accumulates. Then you add a new dialog and invoke the model again. This is how the model remembers the chat history. Below is an example of how the above code responds to your questions:

Therefore, the chat history accumulated by the end of the above code would be the following:

You may get a better result from another model. You may also get a different result due to the randomness in the model.

Summary

GPT4All is a nice tool you can play with on your computer. It allows you to explore interaction with a large language model and helps you better understand the capabilities and limitations of such models. In this post, you learned that:

  • GPT4All has a desktop client that you can install on your computer
  • GPT4All has a Python interface that allows you to interact with a language model in code
  • There are multiple language models available

Maximize Your Productivity with ChatGPT!

Maximizing Productivity with ChatGPT

Let Generative AI Help You Work Smarter

...by leveraging the power of advanced AI from ChatGPT, Google Bard, and many other tools online

Discover how in my new Ebook:
Maximizing Productivity with ChatGPT

It provides great tips with examples of all kinds to make you the boss of AI robots
for brainstorming, editing, expert helper, translator, and much more...

Make AI work for you with my latest book


See What's Inside

31 Responses to Get a Taste of LLMs from GPT4All

  1. Michael May 26, 2023 at 6:44 am #

    Using GPT4All is definitely one of the easiest ways to install an LLM model on a computer. The newest models you can download work quite well, not quite GPT-4 level but getting there, and over the next few months they will only get better. I like how by ticking the ‘enable web server’ check box you can set this up as an API service to allow for embedding into applications.

  2. K hwang May 27, 2023 at 3:14 pm #

    Hi

    It is good to try gpt4all

    However, I got the following strange characters from responses.

    How could I correct this error?

    thanks a lot

    ### Prompt:
    Name 3 colors
    ### Response:
    &64!!7G2;%&C8**”,GAH@)E$<A-E981)$;8(90BD;;4::=.,GABD2-61&4H$36!0);&&.7<=(E,%D:)
    {'model': 'ggml-gpt4all-j-v1.3-groovy',
    'usage': {'prompt_tokens': 239,
    'completion_tokens': 128,
    'total_tokens': 367},
    'choices': [{'message': {'role': 'assistant',
    'content': '&64!!7G2;%&C8**”,GAH@)E$<A-E981)$;8(90BD;;4::=.,GABD2-61&4H$36!0);&&.7<=(E,%D:)'}}]}

    • James Carmichael May 28, 2023 at 6:09 am #

      Hi K hwang…You may wish to try your model in Google Colab to rule out any issues with your local environment.

  3. K hwang May 30, 2023 at 12:52 pm #

    Thanks.
    When I tried in the COLAB, it was Ok.

    With my notebook, I have still got the same problem.
    I am using Korean font now.

    Is it a font problem?

    • Adrian Tam May 31, 2023 at 4:35 am #

      Font should not be a problem. But maybe you typed in “full-width” version of latin alphabets? e.g., ABC vs ABC

      • K hwang May 31, 2023 at 12:09 pm #

        Howdy Adrian

        I got the following responses when I attempted other Prompt (questions).

        I appreciate your kindness.

        Found model file.
        ### Instruction:
        The prompt below is a question to answer, a task to complete, or a conversation
        to respond to; decide which and write an appropriate response.

        ### Prompt:
        name capital for USA
        ### Response:
        %(9BH-G,>!.50>8%FA,)E=499C2″”3+,:,-5-165;!@27$,9<=EA84!ACG1C4ECC5@<%+A,4"@H+3:-5"90F1$2:H!CH*F,+=$

        ### Instruction:
        The prompt below is a question to answer, a task to complete, or a conversation
        to respond to; decide which and write an appropriate response.

        ### Prompt:
        USA?
        ### Response:
        87957;;G,F%21D(C&,=94;&61<$=.9FDE4A81H&8F$@9@,$&*E%D,EHA1%&)G=H0GB$):2G2$139H*<<4:92A=C$<6A:,1-$.D8*7C

        ### Instruction:
        The prompt below is a question to answer, a task to complete, or a conversation
        to respond to; decide which and write an appropriate response.

        ### Prompt:
        who am I
        ### Response:
        ";D);)*)(8E2-3A54;8<,332$=22.8$+!$6(59)HH0=$>E7)=,A3>.@-F+A8<DE6&,9C%04H*2%$A517A=86:(59&G.*:9:
        {'model': 'ggml-gpt4all-j-v1.3-groovy',
        'usage': {'prompt_tokens': 234,
        'completion_tokens': 128,
        'total_tokens': 362},
        'choices': [{'message': {'role': 'assistant',
        'content': '";D);)*)(8E2-3A54;8<,332$=22.8$+!$6(59)HH0=$>E7)=,A3>.@-F+A8<DE6&,9C%04H*2%$A517A=86:(59&G.*:9:'}}]}

  4. John Warford May 31, 2023 at 1:43 am #

    I am having the same problem as K hwang. Iam using Windows 10 and the latest version of python and pycharm

    ### Prompt:
    Give me a list of 10 colors and their RGB code
    ### Response:
    71*F).D=&8;)6A&9B1″:&+1;H7:),E3+HGE4)$0H($8.0%GF.).H(5H06A37″=:2;;*0″2!H)>FE3+,@”@,,($5$93&5H+1&,!+<;53"16@)0()=!H%;9$;:
    {'model': 'ggml-gpt4all-j-v1.3-groovy', 'usage': {'prompt_tokens': 272, 'completion_tokens': 128, 'total_tokens': 400}, 'choices': [{'message': {'role': 'assistant', 'content': '71*F).D=&8;)6A&9B1":&+1;H7:),E3+HGE4)$0H($8.0%GF.).H(5H06A37″=:2;;*0″2!H)>FE3+,@”@,,($5$93&5H+1&,!+<;53"16@)0()=!H%;9$;:'}}]}

    Process finished with exit code 0

    • James Carmichael May 31, 2023 at 9:18 am #

      Hi John…Just curious…Have you tried your model in Google Colab?

      • John Warford May 31, 2023 at 4:25 pm #

        Hi James, my apologies I forgot to add that I was just using your first example from above. So I have now confirmed that it works in Colab, however I still get the same garbled output in pycharm, and at the command line uing a command prompt.

        —————
        import gpt4all

        gptj = gpt4all.GPT4All("ggml-gpt4all-j-v1.3-groovy")
        messages = [{"role": "user", "content": "Give me a list of 10 colors and their RGB code"}]
        ret = gptj.chat_completion(messages)
        print(ret)

        100%|██████████| 3.79G/3.79G [01:29<00:00, 42.1MiB/s]
        Model downloaded at: /root/.cache/gpt4all/ggml-gpt4all-j-v1.3-groovy.bin
        ### Instruction:
        The prompt below is a question to answer, a task to complete, or a conversation
        to respond to; decide which and write an appropriate response.

        ### Prompt:
        Give me a list of 10 colors and their RGB code
        ### Response:
        Here is a list of 10 colors and their RGB code:Red (255, 0, 0) Blue (0, 255, 0) Green (0, 0, 255) Yellow (255, 255, 0) Orange (255, 127, 0) Purple (0, 128, 0) Pink (255, 192, 203) Gray (128, 128, 128) Black (0, 0, 0) White (255, 255, 255)
        {'model': 'ggml-gpt4all-j-v1.3-groovy', 'usage': {'prompt_tokens': 272, 'completion_tokens': 244, 'total_tokens': 516}, 'choices': [{'message': {'role': 'assistant', 'content': ' Here is a list of 10 colors and their RGB code:Red (255, 0, 0) Blue (0, 255, 0) Green (0, 0, 255) Yellow (255, 255, 0) Orange (255, 127, 0) Purple (0, 128, 0) Pink (255, 192, 203) Gray (128, 128, 128) Black (0, 0, 0) White (255, 255, 255)'}}]}
        ——-

    • Dee July 19, 2023 at 7:21 am #

      Hi John,

      I am also using Windows to run GPT4All on a local machine. But I just got a very strange error when importing GPT4All: ‘TypeError: ‘type’ object is not subscriptable’.

      I am just wondering if you have encountered this error before? My Python virtual environment is: Python v3.8.17 and GPT4All v1.0.5.

      I am not sure if this was caused by Python’s version and/or GPT4All version. Could you please let me know your Python and GPT4All versions?

      Thank you!

  5. John Warford May 31, 2023 at 4:28 pm #

    Here is my full console output
    python main.py
    Hi, PyCharm
    Found model file.
    gptj_model_load: loading model from ‘C:\\\\Users\\\\jwarfo01\\\\.cache\\\\gpt4all\\ggml-gpt4all-j-v1.3-groovy.bin’ – please wait …
    gptj_model_load: n_vocab = 50400
    gptj_model_load: n_ctx = 2048
    gptj_model_load: n_embd = 4096
    gptj_model_load: n_head = 16
    gptj_model_load: n_layer = 28
    gptj_model_load: n_rot = 64
    gptj_model_load: f16 = 2
    gptj_model_load: ggml ctx size = 5401.45 MB
    gptj_model_load: kv self size = 896.00 MB
    gptj_model_load: done
    gptj_model_load: model size = 123.05 MB / num tensors = 1
    ### Instruction:
    The prompt below is a question to answer, a task to complete, or a conversation
    to respond to; decide which and write an appropriate response.

    ### Prompt:
    Give me a list of 10 colors and their RGB code
    ### Response:
    EAB8;-50B3B3)19&G*-$%3FCB+;%,B:-=E:F&B*)1(@+2!18(*2>;,H)*C)3F8B494@3+%9A19!)F
    {‘model’: ‘ggml-gpt4all-j-v1.3-groovy’, ‘usage’: {‘prompt_tokens’: 272, ‘completion_tokens’: 128, ‘total_tokens’: 400}, ‘choices’: [{‘message’: {‘role’: ‘assistant’, ‘content’: ‘EAB8;-50B3B3)19&G*-$%3FCB+;%,B:-=E:F&B*)1(@+2!18(*2>;,H)*C)3F8B494@3+%9A19!)F’}}]}

  6. Achim May 31, 2023 at 5:35 pm #

    Hello, does anyone have any idea how to process additional context information?

    • James Carmichael June 1, 2023 at 5:16 am #

      Hi Achim…Please clarify what is meant by “process additional context information”. That will enable us to better assist you.

      • Achim June 2, 2023 at 1:05 am #

        Hi, I want to make a a similarity search on dokuments based on the question I want to pass to GPT4All: this should be used as a context for the question.

        -> I want to feed the question and the context to GPT4All

        Thanks

  7. John Warford May 31, 2023 at 7:28 pm #

    FYI: the same script runs fine on Ubuntu. I set it up under the windows Linux subsystem.

  8. catherine June 2, 2023 at 4:43 pm #

    I’ve noticed that the responses are sometimes cut off in the middle of a sentence. Is there a way to ensure that the complete response is returned when using gptm.generate and chat_complete?

    • James Carmichael June 3, 2023 at 11:40 am #

      Hi catherine…I am not familiar with that issue. Can you provide an example so that we can attempt to reproduce your result?

      Also, more infomation can be found here:

      https://gpt4all.io/index.html

  9. K hwang June 3, 2023 at 4:26 pm #

    Hi

    I solved the strange font problem when I downloaded the following file :

    file name —> ggml-gpt4all-j-v1.3-groovy.bin 3.69 GB

    The problem was that this ggml-gpt4all-j-v1.3-groovy.bin was not completely downloaded.
    I downloaded that problem file manually ~/.cache/gpt4all.

    enjoy your journey to a GPT world.

    ########### the correct responses follow ######################

    Found model file.
    ### Instruction:
    The prompt below is a question to answer, a task to complete, or a conversation
    to respond to; decide which and write an appropriate response.

    ### Prompt:
    Name 3 colors
    ### Response:
    Blue, Green and Red
    ### Instruction:
    The prompt below is a question to answer, a task to complete, or a conversation
    to respond to; decide which and write an appropriate response.

    ### Prompt:
    name capital for USA
    ### Response:
    The name capital for the United States is Washington D.C., also known as the “Capital of America.”
    ### Instruction:
    The prompt below is a question to answer, a task to complete, or a conversation
    to respond to; decide which and write an appropriate response.

    ### Prompt:
    USA?
    ### Response:
    I am from the United States.
    ### Instruction:
    The prompt below is a question to answer, a task to complete, or a conversation
    to respond to; decide which and write an appropriate response.

    ### Prompt:
    who am I
    ### Response:
    I’m a computer program designed to assist you in completing tasks and answering questions. I was created to help you with your daily tasks and answer any questions that may arise. I am programmed to understand and respond appropriately based on the context of your question. I am here to help you in any way that is possible.

  10. Ben Langley June 6, 2023 at 11:38 pm #

    Hello!

    I love your article, it’s an awesome read!

    I’ve been fiddling around with GPT4All recently and I wanted to ask a question based on something you said.
    Under the heading ‘What Is GPT4All?’ you wrote ‘It allows you to train and deploy your model’. Is training and deploying your own model really possible? I’ve been trying for ages but I can’t figure out how. I don’t suppose you could help me?

    Thanks in advance!

  11. John Warford June 11, 2023 at 5:03 am #

    Hi James, I am happy to report that after several attempts I was able to directly download all 3.6 GB of ggml-gpt4all-j-v1.3-groovy.bin

    My script runs fine now. Thanks for a great article.

    Like K hwang above: I did not realize that the original downlead had failed.

    Below is my successful output in Pycharm on Windows 10.

    Found model file.
    gptj_model_load: loading model from ‘C:\\\\Users\\\\jwarfo01\\\\.cache\\\\gpt4all\\ggml-gpt4all-j-v1.3-groovy.bin’ – please wait …
    gptj_model_load: n_vocab = 50400
    gptj_model_load: n_ctx = 2048
    gptj_model_load: n_embd = 4096
    gptj_model_load: n_head = 16
    gptj_model_load: n_layer = 28
    gptj_model_load: n_rot = 64
    gptj_model_load: f16 = 2
    gptj_model_load: ggml ctx size = 5401.45 MB
    gptj_model_load: kv self size = 896.00 MB
    gptj_model_load: …………………………….. done
    gptj_model_load: model size = 3609.38 MB / num tensors = 285
    ### Instruction:
    The prompt below is a question to answer, a task to complete, or a conversation
    to respond to; decide which and write an appropriate response.

    ### Prompt:
    Name 3 colors
    ### Response:
    Blue, Green and Red
    {‘model’: ‘ggml-gpt4all-j-v1.3-groovy’, ‘usage’: {‘prompt_tokens’: 239, ‘completion_tokens’: 20, ‘total_tokens’: 259}, ‘choices’: [{‘message’: {‘role’: ‘assistant’, ‘content’: ‘ Blue, Green and Red’}}]}

    Process finished with exit code 0

  12. flaming flamingo99 July 4, 2023 at 6:44 pm #

    Where can I get the available model names?

  13. Sanjay Dasgupta July 15, 2023 at 10:25 pm #

    Unfortunately, the gpt4all API is not yet stable, and the current version (1.0.5, as of 15th July 2023), is not compatible with the excellent example code in this article.

    But some fiddling with the API shows that the following changes (see the two new lines between the comments) may be useful:

    import gpt4all

    gptj = gpt4all.GPT4All("ggml-gpt4all-j-v1.3-groovy")
    messages = [{"role": "user", "content": "Give me a list of 10 colors and their RGB code"}]
    # no changes above this comment
    prompt = gptj._format_chat_prompt_template(messages)
    response = gptj.generate(prompt)
    # no changes below this comment
    ret = gptj.chat_completion(messages)

  14. Dee July 19, 2023 at 2:43 am #

    Hi Adrian,

    Many thanks for introducing how to run GPT4All mode locally!

    About using GPT4All in Python, I have firstly installed a Python virtual environment on my local machine and then installed GPT4All via ‘pip install gpt4all’ command. After that, I’ve tried to run the simple code that you have given and got a strange error:


    Traceback (most recent call last):
    File “F:\model_gpt4all\local_test.py”, line 1, in
    import gpt4all
    File “F:\nlp_llm\lib\site-packages\gpt4all\__init__.py”, line 1, in
    from .gpt4all import GPT4All, Embed4All # noqa
    File “F:\nlp_llm\lib\site-packages\gpt4all\gpt4all.py”, line 13, in
    from . import pyllmodel
    File “F:\nlp_llm\lib\site-packages\gpt4all\pyllmodel.py”, line 140, in
    class LLModel:
    File “F:\nlp_llm\lib\site-packages\gpt4all\pyllmodel.py”, line 253, in LLModel
    ) -> list[float]:
    TypeError: ‘type’ object is not subscriptable

    It seems that there might be a bug in GPT4All library.. Could you please let me know if you have encountered this error before?

    My running environment is:

    OS: Windows 11
    Python: v3.8.17
    GPT4All: v1.0.5

    I am not sure if this was caused by Python version or GPT4All version? Could you please give me some suggestions about this?

    Many thanks!

    • James Carmichael July 19, 2023 at 7:43 am #

      Thank you Dee for the feedback! We will investigate and let you know what we find. In the meantime you may wish to post to StackOverflow to increase likelihood that others can provide insight into the error you are encountering.

  15. Dee July 19, 2023 at 7:02 am #

    Hi John,

    I am also using Windows to run GPT4All on local machine. But I got a very strange error when importing GPT4All in Pycharm: ‘TypeError: ‘type’ object is not subscriptable’. I am wondering if you encountered this error before? My running environment is: Python v3.8.17 and GPT4All v1.0.5. Could you let me the versions of Python and GPT4All that you are using?

    Thanks!

  16. Santos November 16, 2023 at 6:42 pm #

    i was try this its giving me error Help me
    from gpt4all import GPT4All
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
    output = model.generate("The capital of France is ", max_tokens=3)
    print(output)

    Error: python3 hello.py
    Traceback (most recent call last):
    File “/usr/lib/python3/dist-packages/urllib3/connection.py”, line 169, in _new_conn
    conn = connection.create_connection(
    File “/usr/lib/python3/dist-packages/urllib3/util/connection.py”, line 96, in create_connection
    raise err
    File “/usr/lib/python3/dist-packages/urllib3/util/connection.py”, line 86, in create_connection
    sock.connect(sa)
    TimeoutError: [Errno 110] Connection timed out

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
    File “/usr/lib/python3/dist-packages/urllib3/connectionpool.py”, line 700, in urlopen
    httplib_response = self._make_request(
    File “/usr/lib/python3/dist-packages/urllib3/connectionpool.py”, line 383, in _make_request
    self._validate_conn(conn)
    File “/usr/lib/python3/dist-packages/urllib3/connectionpool.py”, line 1017, in _validate_conn
    conn.connect()
    File “/usr/lib/python3/dist-packages/urllib3/connection.py”, line 353, in connect
    conn = self._new_conn()
    File “/usr/lib/python3/dist-packages/urllib3/connection.py”, line 174, in _new_conn
    raise ConnectTimeoutError(
    urllib3.exceptions.ConnectTimeoutError: (, ‘Connection to raw.githubusercontent.com timed out. (connect timeout=None)’)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
    File “/usr/lib/python3/dist-packages/requests/adapters.py”, line 439, in send
    resp = conn.urlopen(
    File “/usr/lib/python3/dist-packages/urllib3/connectionpool.py”, line 756, in urlopen
    retries = retries.increment(
    File “/usr/lib/python3/dist-packages/urllib3/util/retry.py”, line 574, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
    urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host=’raw.githubusercontent.com’, port=443): Max retries exceeded with url: /nomic-ai/gpt4all/main/gpt4all-chat/metadata/models2.json (Caused by ConnectTimeoutError(, ‘Connection to raw.githubusercontent.com timed out. (connect timeout=None)’))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
    File “/home/oci7/Documents/Santos/Python/GPTAPI/hello.py”, line 2, in
    model = GPT4All(model_name=’orca-mini-3b-gguf2-q4_0.gguf’)
    File “/home/oci7/.local/lib/python3.10/site-packages/gpt4all/gpt4all.py”, line 97, in __init__
    self.config: ConfigType = self.retrieve_model(model_name, model_path=model_path, allow_download=allow_download, verbose=verbose)
    File “/home/oci7/.local/lib/python3.10/site-packages/gpt4all/gpt4all.py”, line 149, in retrieve_model
    available_models = GPT4All.list_models()
    File “/home/oci7/.local/lib/python3.10/site-packages/gpt4all/gpt4all.py”, line 118, in list_models
    resp = requests.get(“https://gpt4all.io/models/models2.json”)
    File “/usr/lib/python3/dist-packages/requests/api.py”, line 76, in get
    return request(‘get’, url, params=params, **kwargs)
    File “/usr/lib/python3/dist-packages/requests/api.py”, line 61, in request
    return session.request(method=method, url=url, **kwargs)
    File “/usr/lib/python3/dist-packages/requests/sessions.py”, line 544, in request
    resp = self.send(prep, **send_kwargs)
    File “/usr/lib/python3/dist-packages/requests/sessions.py”, line 679, in send
    history = [resp for resp in gen]
    File “/usr/lib/python3/dist-packages/requests/sessions.py”, line 679, in
    history = [resp for resp in gen]
    File “/usr/lib/python3/dist-packages/requests/sessions.py”, line 237, in resolve_redirects
    resp = self.send(
    File “/usr/lib/python3/dist-packages/requests/sessions.py”, line 657, in send
    r = adapter.send(request, **kwargs)
    File “/usr/lib/python3/dist-packages/requests/adapters.py”, line 504, in send
    raise ConnectTimeout(e, request=request)
    requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host=’raw.githubusercontent.com’, port=443): Max retries exceeded with url: /nomic-ai/gpt4all/main/gpt4all-chat/metadata/models2.json (Caused by ConnectTimeoutError(, ‘Connection to raw.githubusercontent.com timed out. (connect timeout=None)’))

    • James Carmichael November 17, 2023 at 11:05 am #

      Hi Santos…Did you copy and paste the code or did you type it?

      • Santos December 4, 2023 at 6:18 pm #

        please help me

  17. Santos November 24, 2023 at 4:12 pm #

    copy paste
