Post History
Answer
#2: Post edited
- It seems like what you are hinting at is the degree to which an instruction contains the context required to understand it, answer it, and evaluate the answer. Moreover, the question hints at an objective instruction being one that is almost completely self-contained in these aspects.
- In the example you gave (Capitalize all letter S characters in a sentence), the prompt contain all the information about the subject acted upon, and, presumably, the computer the program is running on has the concept of characters/letters explicitly encoded into its operating system, including the concept of capitalization.
- In this sense, LLM prompts already represent a small subset of natural language instructions. If we were in the same room, and I pointed to an object and told you to hand it to me, the instruction could be considered objective if we can both see the object, but it isn’t self-contained in the way it would need to be for an LLM because of the lack of shared context (assuming the LLM can’t see).
- All this to say, I think the information you are looking for may be _Contextual_ vs. _Semantic_ in nature.
- It seems like what you are hinting at is the degree to which an instruction contains the context required to understand it, answer it, and evaluate the answer. Moreover, the question hints at an objective instruction being one that is almost completely self-contained in these aspects.
- In the example you gave (Capitalize all letter S characters in a sentence), the prompt contains all the information about the subject acted upon, and, presumably, the computer the program is running on has the concept of characters/letters explicitly encoded into its operating system, including the concept of capitalization.
- In this sense, LLM prompts already represent a small subset of natural language instructions. If we were in the same room, and I pointed to an object and told you to hand it to me, the instruction could be considered objective if we can both see the object, but it isn’t self-contained in the way it would need to be for an LLM because of the lack of shared context (assuming the LLM can’t see).
- All this to say, I think the information you are looking for may be _Contextual_ vs. _Semantic_ in nature.
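As an aside, the self-contained nature of the example task can be sketched in a few lines of Python (the function name and test sentence here are illustrative, not from the original post):

```python
def capitalize_s(sentence: str) -> str:
    # Replace every lowercase 's' with 'S'.
    # The instruction is fully self-contained: the expected output is
    # determined by the input sentence alone, with no shared context needed.
    return sentence.replace("s", "S")

# An objective check: any evaluator given the same input can verify the result.
assert capitalize_s("she sells seashells") == "She SellS SeaShellS"
```

The point being that such a prompt is objectively evaluable precisely because everything needed to verify the answer is present in the instruction itself.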
#1: Initial revision
It seems like what you are hinting at is the degree to which an instruction contains the context required to understand it, answer it, and evaluate the answer. Moreover, the question hints at an objective instruction being one that is almost completely self-contained in these aspects.

In the example you gave (Capitalize all letter S characters in a sentence), the prompt contain all the information about the subject acted upon, and, presumably, the computer the program is running on has the concept of characters/letters explicitly encoded into its operating system, including the concept of capitalization.

In this sense, LLM prompts already represent a small subset of natural language instructions. If we were in the same room, and I pointed to an object and told you to hand it to me, the instruction could be considered objective if we can both see the object, but it isn’t self-contained in the way it would need to be for an LLM because of the lack of shared context (assuming the LLM can’t see).

All this to say, I think the information you are looking for may be _Contextual_ vs. _Semantic_ in nature.