Rietenpetardat

| prefix = Mention
| owner = [[User:Edward|Edward]]
| language = Python
| sourcecode = https://github.com/Edward205/markov-discord-bot (old version)
| invitelink = Not available to public


== Commands ==
The following refers to the latest version of Rietenpetardat as of the 2nd of October 2023.


* '''/intreaba''' <question> - Queries a language model instructed to answer questions (see the sketch after this list)
* '''/gptconfig''' - Prints the current configuration of the GPT model
* '''/identitate''' <identity> - Sets the identity to the given parameter (only for moderators)
* '''/parametru''' - Sets a parameter from the list below (only for moderators)
** '''modgen''' - Selects the way in which a message is generated
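
As a rough illustration of how such commands are wired up, here is a minimal sketch of handling '''/intreaba''' with [https://discord.js.org/ discord.js] v14. The <code>askModel</code> helper is a hypothetical stand-in for the actual generation backend, and the <code>question</code> option name is an assumption; registration of the command with Discord's REST API is omitted.

<syntaxhighlight lang="javascript">
// Minimal sketch of handling /intreaba with discord.js v14.
// askModel is a hypothetical stand-in for the real generation backend.
const { Client, GatewayIntentBits, SlashCommandBuilder } = require('discord.js');

const client = new Client({ intents: [GatewayIntentBits.Guilds] });

// Command definition that would be registered with Discord's API
// (registration via the REST API is omitted here).
const intreaba = new SlashCommandBuilder()
  .setName('intreaba')
  .setDescription('Query a language model instructed to answer questions')
  .addStringOption(opt =>
    opt.setName('question').setDescription('The question to ask').setRequired(true));

// Hypothetical stand-in for the generation backend.
async function askModel(question) {
  return `You asked: ${question}`;
}

client.on('interactionCreate', async (interaction) => {
  if (!interaction.isChatInputCommand() || interaction.commandName !== 'intreaba') return;
  await interaction.deferReply(); // generation may take a while
  const answer = await askModel(interaction.options.getString('question'));
  await interaction.editReply(answer);
});

client.login(process.env.DISCORD_TOKEN);
</syntaxhighlight>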
Chatbots are complex programs, requiring a high degree of analysis and processing of text data. One approach that would eliminate the dependence on an external service while still generating coherent conversations would be to train an LLM on just the OKPR chat data, either from scratch or by fine-tuning an existing model.


All generation-related code until The Return, with the exception of GPT-3 and ChatGPT, was written in C++ to be as fast as possible and linked to [https://discord.js.org/ discord.js] via the stdin and stdout interface.
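
In rough terms, that link could look like the sketch below: the Node.js process spawns the C++ generator as a long-lived child process, writes prompts to its stdin, and reads generated messages back from its stdout one line at a time. The <code>./generator</code> binary name and the one-line-per-message protocol are assumptions, not the project's actual code.

<syntaxhighlight lang="javascript">
// Sketch of bridging a C++ generator and discord.js over stdin/stdout.
// The ./generator binary and one-line-per-message protocol are assumptions.
const { spawn } = require('child_process');
const readline = require('readline');

const generator = spawn('./generator'); // C++ process, kept alive for the bot's lifetime
const lines = readline.createInterface({ input: generator.stdout });

// Resolve one pending request per line the generator prints.
const pending = [];
lines.on('line', (line) => {
  const resolve = pending.shift();
  if (resolve) resolve(line);
});

// Write a prompt to the generator and wait for its reply.
function generate(prompt) {
  return new Promise((resolve) => {
    pending.push(resolve);
    generator.stdin.write(prompt + '\n');
  });
}

// Usage from a discord.js handler: generate(seedText).then(reply => channel.send(reply));
</syntaxhighlight>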


The training data for the algorithms or LLMs is an archive of text messages from the #bucuresti-general channel on OKPR v2, taken one day before the server was deleted (see [[OkPrietenRetardat/Timeline|Timeline]]). Another archive of the #bucuresti-general channel was later taken from OKPR v3 and will also be used. These archives are not publicly available.