...

  • Fixed the issue of the model giving imperfect responses by changing the source URL to the parent URL directory of the Hyperledger site.
  • Added functionality allowing the model to answer questions about uploaded documents.
  • Tested different models available under the Apache 2.0 license and found that Mistral performs better after some tuning of its hyperparameters.
  • The model was echoing the context and system prompts in its responses; I fixed this by filtering the output so that only the model's generated text is returned.
  • Currently, I am working on adding the ability to stop the model mid-generation and to edit a previous prompt, similar to ChatGPT.
  • This is the current version after modifying the AI-FAQ code (I have uploaded my resume for context in this example).
  • Screenshot:
  • I have built a basic React app with basic API endpoints served by the Django backend. This is how it looks (still in progress):
    (Screenshot of the React app)
  • I also rented an RTX 3090 GPU with 24 GB of VRAM to run the model. Through Ollama I am getting very good results, and the model's responses are very fast. I am currently testing the Gradio app on it to check its response time; once everything is OK, I will integrate this into the backend.
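The prompt-echo fix mentioned above can be sketched as follows. This is a minimal illustration, not the actual AI-FAQ code; the `strip_prompt` helper and the sample strings are hypothetical:

```python
def strip_prompt(raw_output: str, prompt: str) -> str:
    """Return only the model's generated text.

    Some models echo the system/context prompt at the start of their
    output; this drops that prefix if it is present.
    """
    if raw_output.startswith(prompt):
        return raw_output[len(prompt):].lstrip()
    return raw_output

# Hypothetical example values:
prompt = "System: answer briefly.\nUser: What is Hyperledger?"
raw = prompt + "\nHyperledger is an open-source blockchain project."
print(strip_prompt(raw, prompt))
# → Hyperledger is an open-source blockchain project.
```

In practice the prefix check may need to be tolerant of whitespace or template tokens, depending on the model's chat template.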
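The stop-generation feature mentioned above could work along these lines: stream tokens and check a cancellation flag between chunks. This is a sketch with a fake token stream, not Ollama's actual streaming API; the `generate` function is hypothetical:

```python
import threading

def generate(tokens, stop_event: threading.Event) -> str:
    """Accumulate streamed tokens until exhausted or the user presses Stop."""
    out = []
    for tok in tokens:
        if stop_event.is_set():  # the UI sets this when the user clicks Stop
            break
        out.append(tok)
    return "".join(out)

stop = threading.Event()
# Normally another thread (the web frontend) would call stop.set();
# here it stays unset, so the full text is produced.
print(generate(["Hyper", "ledger", " FAQ"], stop))
# → Hyperledger FAQ
```

The same flag-checking pattern works whether the tokens come from Ollama's streaming endpoint or any other generator.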

My suggestions:

  • Use the Giskard module to test the quality of the responses generated by the model.
  • Use the Django framework, as it provides an ORM that would be useful for storing user data and conversations. Additionally, since the model runs in Python, this framework is a natural fit.
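To illustrate what the ORM would abstract away, here is the kind of hand-written storage code Django would replace, using only the stdlib `sqlite3` module. The table and column names are hypothetical, not part of the current codebase:

```python
import sqlite3

# Store each user prompt/response pair; with Django this table would be
# a models.Model subclass and the insert a single .objects.create() call.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE conversation (id INTEGER PRIMARY KEY, "
    "user TEXT, prompt TEXT, response TEXT)"
)
conn.execute(
    "INSERT INTO conversation (user, prompt, response) VALUES (?, ?, ?)",
    ("alice", "What is Hyperledger?", "An open-source blockchain project."),
)
rows = conn.execute("SELECT prompt, response FROM conversation").fetchall()
print(rows)
# → [('What is Hyperledger?', 'An open-source blockchain project.')]
```

Besides hiding the SQL, Django's ORM would also handle migrations and per-user querysets, which matters once conversations are tied to authenticated users.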

...