This release introduces several exciting enhancements. The chat app now features a sidebar for conversation history, new chats, and settings, along with helpful tooltips. Additionally, local models are now supported using ollama, and the Perplexity Service offers various models like llama-3-sonar and mixtral-8x7b. Cohere Service, with models such as command and command-light, is also available. Internally, there are improvements, bug fixes, and quality-of-life enhancements.
We are happy to announce that we now support local models with ollama. By default we look for the ollama host at http://localhost:11434, but this can be customized by setting the OLLAMA_HOST environment variable. Be aware that you are in charge of maintaining your own ollama installation and models.
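If your ollama server isn't at the default address, a minimal sketch of the override (the host shown is a placeholder):

```r
# Tell {gptstudio} where to find the ollama server; the default is
# http://localhost:11434. The host below is a placeholder.
Sys.setenv(OLLAMA_HOST = "http://my-ollama-box:11434")
```

You can also set OLLAMA_HOST in your .Renviron so it persists across sessions.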
Perplexity AI now offers a wide range of models as part of their service. The current version includes the following models: llama-3-sonar-small-32k-chat, llama-3-sonar-small-32k-online, llama-3-sonar-large-32k-chat, llama-3-sonar-large-32k-online, llama-3-8b-instruct, llama-3-70b-instruct, and mixtral-8x7b-instruct. See the Perplexity API documentation for more information on these models.
Cohere is now available as another service. The current version includes the following models: command, command-light, command-nightly, and command-light-nightly. See Cohere’s docs for more on these models and capabilities.
Response streams are now parsed with SSEparser::SSEparser. This doesn't affect how users interact with the addins, but it avoids a wider range of server errors.
We now use {lintr} for keeping code consistency.
{gptstudio} now requires {bslib} v0.6.0 or greater, to take advantage of the sidebar styling.
gptstudio_sitrep() has been added to help with debugging and setup.
We’ve introduced a configuration file that persists across sessions. Now, your preferred app settings will be loaded each time you launch the app, making it even more user-friendly.
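A quick way to verify your setup, including which settings were picked up, is the new situation report:

```r
# Print a situation report to help with debugging and setup.
gptstudio::gptstudio_sitrep()
```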
Further enhancing customization, we’ve added a “task” option that lets you choose the system prompt from options such as “coding”, “general”, “advanced developer”, and “custom”. The “custom” option allows you to replace the system prompt instructions entirely.
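As a sketch of how that could be set programmatically (the option names below are assumptions for illustration, not confirmed package options; the in-app settings panel is the supported route):

```r
# Hypothetical option names, shown for illustration only.
options(
  gptstudio.task          = "custom",
  gptstudio.custom_prompt = "You are a terse R expert. Answer with code first."
)
```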
We’re excited to announce that our service lineup now includes models from HuggingFace’s inference API, Anthropic’s Claude models, Google’s MakerSuite, and the Azure OpenAI service, broadening the range of AI solutions you can use.
In an effort to make future API additions easier, API calls now use S3 classes.
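The pattern looks roughly like the toy sketch below; the generic and class names are illustrative, not the package’s actual internals:

```r
# A generic that dispatches on the service class of a request object.
request_perform <- function(req, ...) UseMethod("request_perform")

request_perform.openai <- function(req, ...) {
  # Build and send an OpenAI-flavored request here.
}

request_perform.ollama <- function(req, ...) {
  # Build and send an ollama-flavored request here.
}

# Supporting a new service is then mostly a matter of adding a method.
req <- structure(list(prompt = "hi"), class = c("ollama", "gptstudio_request"))
request_perform(req) # dispatches to request_perform.ollama()
```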
Inspired by Edgar Ruiz’s work on chattr, we’ve implemented real-time streaming without relying on R6, but this will receive more attention in the 0.4.0 release.
The ChatGPT add-in now comes with an integrated model selection feature, enabling you to choose any chat completion model that matches either gpt-3.5 or gpt-4 in the model name.
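Conceptually, the filter is a simple pattern match on the model names, along these lines (illustrative, not the add-in’s exact code):

```r
# Keep only the models whose names match gpt-3.5 or gpt-4.
available <- c("gpt-3.5-turbo", "gpt-4", "gpt-4-turbo", "whisper-1")
grep("gpt-3\\.5|gpt-4", available, value = TRUE)
#> [1] "gpt-3.5-turbo" "gpt-4"         "gpt-4-turbo"
```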
The add-ins for code commenting and spelling & grammar checking have been upgraded to use the chat/completions endpoint and now default to the gpt-3.5-turbo model. You can modify this default setting as needed.
You now have the option to specify a different base URL for the OpenAI API. This much-requested feature helps you tailor API access to suit your needs.
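As a rough sketch, the override could look like the following; the option name here is an assumption, so check the package documentation for the exact mechanism:

```r
# Hypothetical option name: route OpenAI-style requests to a proxy or
# self-hosted compatible endpoint instead of api.openai.com.
options(gptstudio.openai_url = "https://my-proxy.example.com/v1")
```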
We’ve addressed several issues in this update. The “Spelling and Grammar” and “Comment your code” add-ins now successfully insert text into the source document. Installation issues related to the {stringr} package and compatibility with earlier versions of R have also been resolved.
To ensure optimal user experience, we’re now using GitHub Actions to check compatibility with a wider range of R versions on Ubuntu.
We hope you enjoy the enhanced features and improved performance in this latest version. As always, your feedback is invaluable to us, so please keep it coming!
The ChatGPT addin can now speak German! Thanks to Mark Colley (#107).
The ChatGPT addin can now receive translations. Anyone who wants to contribute a new translation only needs to edit the translation file (“inst/translations/translation.json”). Currently supported languages are English and Spanish.
Requests are now handled with {httr2} functions. This provides a more intuitive way to extend the functionality of the package, meaning that new request parameters to any endpoint are one pipe away from being implemented.
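For instance, a chat completion request composes as a single {httr2} pipeline, and a new parameter is one more element in the body list (this is a simplified sketch against OpenAI’s endpoint, not the package’s exact internals):

```r
library(httr2)

resp <- request("https://api.openai.com/v1/chat/completions") |>
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
  req_body_json(list(
    model = "gpt-3.5-turbo",
    messages = list(list(role = "user", content = "Hello!"))
  )) |>
  req_perform()

# Parse the JSON body of the response.
resp_body_json(resp)
```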
Instead of waiting for the full response to be received before showing it to the user, the chat app now streams the response generation in real time. This makes for shorter wait times and removes the need to use {waiter}.
Each individual message is now rounded and has an icon indicating whether it comes from the user or from the assistant. Each role has a different horizontal alignment and a slightly different background color.
The prompt and buttons have been simplified to give the chat more room to expand.
The app now has a settings button where the user can still choose their skill level and preferred style.
When the app starts (or history is cleared) the assistant greets the user with a random welcome message and instructions on how to use the app.
The app is limited to 800px width, and the prompt input is always fixed to the bottom of the app.
This makes it look more integrated with the IDE, giving it the feel of a VS Code extension.
Every code chunk now has a bar on top indicating the language of the code displayed, along with a “Copy” button. When the user clicks the button, the code is copied to the clipboard and the button briefly shows “Copied” as feedback.
The app now uses a narrower grey scroll bar.
Added a NEWS.md file to track changes to the package.