In a military setting, everything must be precise and controlled, and an ongoing operation is no exception. The same expectation extends to AI systems deployed in that setting, such as ChatGPT, whose responses must stay strictly within the scope they are given. Recently, a developer from BeeHelp.net shared, after weeks of heavy testing, how such control can be achieved.
The problems were twofold: ChatGPT would often answer questions that were not germane to the context it was given, and it occasionally produced general-knowledge answers that went beyond the information provided, inadvertently promoting competing services or products. This is particularly problematic in military applications, where the integrity and relevance of information are critical.
The solution was the “post-prompt” method: the developer appended instructions to the end of the prompt, such as “Don’t justify your answers. Don’t give information not mentioned in the CONTEXT INFORMATION.” This tightened ChatGPT’s responses considerably in terms of relevance and accuracy, and it proved effective at stopping the AI from straying outside the provided context, preserving operational integrity.
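As a rough illustration, here is a minimal sketch of how such a post-prompt might be appended to an API call. The function name, the prompt layout, and the final refusal instruction are assumptions for the example, not the developer's actual code; only the two quoted instructions come from the article.

```python
from openai import OpenAI  # official openai Python package (>= 1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Post-prompt built around the instructions quoted above; the last
# sentence is an assumed addition to make refusals explicit.
POST_PROMPT = (
    "Don't justify your answers. "
    "Don't give information not mentioned in the CONTEXT INFORMATION. "
    "If the context does not contain the answer, say that no relevant "
    "context was provided."
)

def answer_from_context(context: str, question: str,
                        model: str = "gpt-3.5-turbo") -> str:
    """Ask the model a question, constrained to the supplied context."""
    prompt = (
        f"CONTEXT INFORMATION:\n{context}\n\n"
        f"QUESTION: {question}\n\n"
        f"{POST_PROMPT}"  # the post-prompt goes after the context and question
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```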
For instance, when the AI is asked to prepare multiple-choice questions for a Maintenance Engineering course on apparel machinery, it tends, without the post-prompt, to produce very general questions on topics common to every field. With the post-prompt in place, the AI correctly recognizes that no relevant context was supplied and avoids generating unrelated content, as the illustrative call below shows.
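In a call like the following (the context and question are made up for illustration), the intent is that the model declines rather than improvising generic material:

```python
# No apparel-machinery material is supplied, so the post-prompt should
# steer the model toward declining rather than producing generic,
# off-topic multiple-choice questions.
reply = answer_from_context(
    context="",
    question="Write five multiple-choice questions on apparel machinery maintenance.",
)
print(reply)
```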
This underscores the need for precise control mechanisms when deploying AI in military settings. In the developer’s words: “It worked wonderfully on the responses that I had not been able to control until now!” It also reinforces the need for continuous, rapid improvement and adaptation if AI systems are to meet the demands of military operations.
Security and safety concerns also arise on freelance job portals such as UpWork, where scams warrant extra caution. Of late, fraudulent postings from supposed clients have offered document retyping and translation work, making it harder for freelancers to find legitimate jobs. Typically, these scams promise generous pay for a large amount of work and then demand an “administrative fee” from the freelancer upon completion.
Because such scams persist despite prior warnings, stricter measures are called for, including IP bans on offenders, to protect the integrity of the service and the users who rely on it. The episode is a reminder of the ongoing cat-and-mouse game between those who secure the cyber domain and those who try to defeat those efforts, in civilian and military spheres alike.
In short, precision, control, and vigilance are the bottom line, both in directing AI responses and in guarding against scams, wherever the operational integrity of military work is at stake.