1. What are the keyboard shortcuts on a Mac running Parallels that correspond to Ctrl+Shift+F12 and Ctrl+Shift+Alt+F12 on a Windows computer used in CotranslatorAI?
Command+Shift+F12 (equivalent to Ctrl+Shift+F12) and Command+Option+Shift+F12 (equivalent to Ctrl+Shift+Alt+F12).
2. Why does CotranslatorAI replace some special characters with the replacement character (�) when loading prompt files?
CotranslatorAI requires specific file encodings to read and display special characters correctly. If you are experiencing issues with diacritical marks or non-Latin characters such as ë, à, é, ă, etc., being replaced with �, it might be due to an incompatible file encoding.
To ensure that CotranslatorAI reads your prompt files correctly, please save your files with one of the following encodings:
- UTF8 with Signature
- UTF8 without Signature (often simply referred to as “UTF8” in text editors)
- Unicode (may also be labeled as “Unicode LE” or “UTF16”)
When using Notepad or similar text editors, you might be able to save your file with an encoding like “Central European.” However, this encoding is not stored within the file itself. When you open the file again, Notepad assumes the default system encoding, which may or may not match the encoding used to create the file. As a result, special characters may not display correctly, especially if the file is opened on a system with a different default encoding.
It is advisable to use UTF8 or Unicode (UTF16) encodings because they are more universally compatible. UTF8 is efficient because it uses a variable number of bytes per character, while Unicode (UTF16) uses 2 bytes for most characters. For greater compatibility, you may choose “UTF8 with Signature,” which prefixes the file with a few special bytes (a byte order mark, or BOM) that tell the text editor which encoding was used.
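The difference between these encodings can be seen directly at the byte level. The short sketch below is plain Python, not part of CotranslatorAI; it shows why a legacy single-byte code page loses characters while the recommended encodings round-trip safely.

```python
# Sample text with diacritics, including "ă", which many legacy
# single-byte code pages (e.g. Latin-1) cannot represent.
text = "ë à é ă"

# Legacy code page: no encoding marker is stored in the file, and
# unrepresentable characters are silently replaced with "?".
latin1 = text.encode("latin-1", errors="replace")

# UTF-8 without signature: variable-length, 1-4 bytes per character.
utf8 = text.encode("utf-8")

# "UTF8 with Signature": the same bytes, prefixed with the BOM EF BB BF.
utf8_bom = text.encode("utf-8-sig")

print(b"?" in latin1)        # True — "ă" was lost in Latin-1
print(utf8_bom[:3].hex())    # efbbbf — the signature an editor detects
print(utf8_bom[3:] == utf8)  # True — the payload is identical to plain UTF-8
```

The key point is that the signature (BOM) travels with the file, so any editor or program that opens it later knows how to decode it, whereas a legacy code page must be guessed.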
By using one of the recommended encodings, you can avoid issues with special characters being replaced by the replacement character and ensure that your prompt files are displayed correctly in CotranslatorAI.
Note: Typing the prompt directly into CotranslatorAI or overwriting the � characters with the correct ones inside CotranslatorAI should also work. The issue mainly arises when reading special characters from prompt files saved with incompatible encodings.
3. How does CotranslatorAI maintain context when sending content to the AI through an ad-hoc prompt?
CotranslatorAI maintains context by resending the chat history, starting from the most recent standard prompt (in CotranslatorAI, this means a prompt sent without the context of previous interactions), whenever an ad-hoc prompt is sent. This is necessary because the OpenAI GPT models are stateless and retain no memory of previous requests.
For example, if you have sent two standard prompts and then decide to send an ad-hoc prompt, you will be sending the second standard prompt, the AI’s response to it, and the ad-hoc prompt. If you then send another ad-hoc prompt, without sending a standard prompt first, you will be sending the second standard prompt, the AI’s response to it, the first ad-hoc prompt, the AI’s response to that ad-hoc prompt, and the second ad-hoc prompt.
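The bookkeeping described above can be sketched as follows. This is plain Python for illustration only; the names (`chat_grid`, `build_adhoc_request`) are assumptions, not CotranslatorAI’s actual code.

```python
# Illustrative sketch: a stateless chat API forces the client to resend
# history. Everything from the latest standard prompt onward is included.

def build_adhoc_request(chat_grid, adhoc_prompt):
    """Collect the chat grid from the most recent standard prompt
    onward, then append the new ad-hoc prompt."""
    start = 0
    for i, turn in enumerate(chat_grid):
        if turn["role"] == "user" and turn["kind"] == "standard":
            start = i  # remember the most recent standard prompt
    return chat_grid[start:] + [
        {"role": "user", "kind": "adhoc", "content": adhoc_prompt}
    ]

# Two standard prompts with responses, then one ad-hoc exchange:
grid = [
    {"role": "user", "kind": "standard", "content": "standard 1"},
    {"role": "assistant", "kind": "reply", "content": "reply 1"},
    {"role": "user", "kind": "standard", "content": "standard 2"},
    {"role": "assistant", "kind": "reply", "content": "reply 2"},
    {"role": "user", "kind": "adhoc", "content": "ad-hoc 1"},
    {"role": "assistant", "kind": "reply", "content": "reply to ad-hoc 1"},
]

request = build_adhoc_request(grid, "ad-hoc 2")
# Sent: standard 2, its reply, ad-hoc 1, its reply, and the new ad-hoc 2.
print([t["content"] for t in request])
```

Note that everything before the most recent standard prompt (here, “standard 1” and its reply) is dropped, which matches the behavior described above.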
When you send an ad-hoc prompt, you are not sending anything that is currently in the Instructions and Segment windows. As context, you are sending only the chat history shown in the chat grid. So whenever you send an ad-hoc prompt, the contents of the Instructions and Segment windows do not matter; clearing those windows has no effect on what you send to ChatGPT via the ad-hoc prompt.
As for ChatGPT in the browser, we cannot say exactly what is sent. It may resend the entire contents of the current chat, or only the last N prompts and responses; that is OpenAI’s internal implementation. The only way to find out would be to measure the exact token statistics of a specific chat session, which is likely not possible for the browser-based ChatGPT.