

New Prompt Engineering Technique Pumps Up Chain-Of-Thought With Factored Decomposition

Generative AI prompt engineering takes a step upward via use of chain-of-thought that is prudently augmented by factored decomposition.

It is said that sometimes you’ve got to stop and smell the roses. This involves calming yourself and overcoming the usual tendency to rush along. We all seem to be in a mad dash these days.

When you overly rush, you tend to splinter or undercut your attention. By instead riveting your scarce attention, you heighten the propensity to observe little things that can make big things happen. Being slow and sure is at times highly advantageous.

Those inspirational thoughts are going to be instrumental to my discussion herein, as you will shortly see.

In today’s column, I am furthering my ongoing series about the latest advances in prompt engineering. My focus this time will be on the use of a fascinating and important new advance associated with chain-of-thought (CoT) reasoning via an added booster known as factored decomposition. I’ll explain what this is and why it is a crucial added technique that you ought to incorporate into your prompt engineering endeavors.

The use of factored decomposition uplifts or supersizes chain-of-thought reasoning to a higher level of capability and results.

As a quick background, chain-of-thought reasoning overall is a vital technique used by those aiming to get generative AI to stepwise showcase its presumed logic when answering a question or solving a problem. You merely instruct generative AI to explain step-by-step what it is doing. This is easy-peasy to request. Why do so? Well, remarkably, research studies have indicated that this is not only insightful for you (i.e., being able to see detailed explanations produced by AI), but it also tends to get generative AI to produce seemingly more reliable and on-target answers.

I’ve covered the basics of chain-of-thought approaches previously, see the link here. Readers asked me for more details and wanted to know about the latest advances regarding this intriguing technique.

Here you go.

Before I dive into the crux of factored decomposition for CoT, let’s make sure we are all on the same page when it comes to the keystones of prompt engineering and generative AI.

Prompt Engineering Is A Cornerstone For Generative AI

As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI. Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc., ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.

For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful looks at the latest in this expanding and evolving realm, covering a wide range of notable techniques.

Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.

Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrative related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts has been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

With the above as an overarching perspective, we are ready to jump into today’s discussion.

The Beauty Of Chain-Of-Thought Reasoning

You can customarily invoke chain-of-thought reasoning by simply instructing generative AI to proceed on a step-by-step basis. Some people like to enter a prompt that says “Let’s think step-by-step” to invoke this capacity of generative AI. The result will be that the generative AI app will list out a variety of steps that seemingly correspond to whatever solving process the AI is undertaking.
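To make that tangible, here is a minimal sketch in Python of what such a prompt looks like in practice. The ask_model() helper is purely a hypothetical placeholder standing in for whatever generative AI app or API you happen to use, and the question is merely illustrative; the only substantive part is the wording appended to the question.

# Minimal sketch of a plain chain-of-thought (CoT) prompt.
# ask_model() is a hypothetical stand-in for your generative AI app or API of choice.
def ask_model(prompt: str) -> str:
    return "(model response would appear here)"  # placeholder

question = "Is a pound of feathers heavier than a pound of bricks?"  # illustrative question
cot_prompt = question + "\n\nLet's think step by step, then state your final answer."
print(ask_model(cot_prompt))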

Allow me to provide some crucial pointers about this.

I’ll cover these four notable points:

(1) Steps aren’t necessarily so.

(2) Steps are otherwise beneficial.

(3) Steps can spark AI answers.

(4) Steps are an AI unresolved question.

Let’s briefly dive into those crucial points.

First, just because you instruct the generative AI to showcase a step-by-step elucidation does not necessarily mean that you are actually witnessing the true step-by-step actions of the AI. When I point this out to people, they are at times shocked and dismayed to hear this.

I’m sure that the usual assumption is that if you tell the AI to do something, by gosh it ought to do what you tell it to do. The problem though in this instance is that the way that generative AI works internally is not especially conducive to adhering to your request or instruction. The reality of the computational and mathematical formulation of generative AI is that it isn’t devised to work particularly on the presumed logical step-by-step basis that you assume it should.

Without going into the nitty gritty here, the internal mechanisms of generative AI are usually based on a vast and elaborate artificial neural network (ANN) that consists of often millions and potentially billions of parameters, see my explanation at the link here. Numeric values get passed along through the nodes of the ANN. The numeric values flow here and there. The words that you eventually get as outputted from the AI are all due to the byzantine arithmetic combination of those numbers and calculations.

Thus, when you ask or tell the generative AI to do a step-by-step elaboration, you are actually getting a somewhat made-up indication of what is taking place. If you were to see the numeric values as they get calculated, this would obviously do you little good in terms of comprehending what is taking place. So, the generative AI is essentially fabricating in words what might be taking place and showcasing to you a depiction as though those were purely the logical steps undertaken.

You need to look at those elicited steps with a skeptical and scrutinizing eye.

Secondly, despite the somewhat bad news of the above that you aren’t truly getting the actual step-by-step process per se, there is good news to be had, nonetheless.

You are getting something that the generative AI has calculated to be potentially useful to you as a step-by-step process that underlies the answer or problem you are trying to solve. This made-up elicitation of logical steps can be highly beneficial to you. Even if those steps do not strictly represent what the AI has done to answer or solve the stated problem, the chances are that the steps will be of interest and value to you.

Furthermore, and here’s the real kicker, the very act of getting the generative AI to invoke chain-of-thought seems to boost the reliability and correctness of the answers generated by the AI. Ergo, even if you were to summarily discard the step-by-step rendition generated by the AI, you are still getting some added value. The AI is seemingly going to provide an answer that at least is better than it otherwise might have been, some of the time.

The reasons for this improvement in the answers generated are hotly debated.

Lots of research has tried to unpack the basis for these somewhat unexpected results. You would almost tend to assume that the answer whether done step-by-step or not done step-by-step should always end up as the same resulting answer. Some would argue that it shouldn’t matter how you get to the top of a mountain, namely that any path will get you there, thus the result is you reached the top of the mountain and the rest are just minor details.

Why should the step-by-step processing particularly enhance the answers being generated?

I tend to currently go along with the following theory on this. Strap yourself in and get ready for a seemingly sensible and plausible rationalization of something that we don’t know for sure about.

Most generative AI is devised to work as quickly as possible to produce or generate an answer for you. It makes sense for AI makers to do this. They realize that people don’t have much patience and want their answers like they seek hamburgers at a fast-food drive-thru. Plus, since the use of generative AI often comes at a price, namely you might need to be paying money to use the generative AI, you naturally want the cost to be as low as feasible.

Okay, so the normal processing is going to fly through the generative AI and seek to punch out an answer as expeditiously as possible.

You might recall I started today’s column by asking you to think about the value of sometimes stopping to smell the roses. I also mentioned that at times giving your attention to the little things can lead to big things happening.

Aha, it is time now to leverage those fine words of wisdom.

One theory is that by instructing the generative AI to do a step-by-step or chain-of-thought process, you are getting the AI to explore avenues within the ANN that otherwise would not have been by default explored. This greater depth of computations might uncover better answers or more apt answers. It is akin to playing chess. If you only consider the next move during a chess game, you are probably going to get beat. If you are willing to look at several moves ahead, you might have a heightened chance of winning.

Some contend that most generative AI is set up to do a one-step chess-playing kind of look ahead by default. This is likely the faster way to get generated results and it also keeps the costs to the user lessened. When a user chooses explicitly to ask for or instruct the generative AI to do a step-by-step, they are essentially overriding the usual default. Keep in mind that you are potentially slowing things down, which is “bad” for you in that the AI might take longer processing time (meaning you get your answer maybe a few moments later, and it might also require slightly more processing cycles at whatever cost you are paying). Most people probably won’t notice the difference in either time delay or increased cost, and by and large the step-by-step approach has almost no perceptible downside (exceptions do apply).

You are metaphorically getting the AI to smell the roses. And, perhaps, doing so causes the little things that otherwise would have been overlooked to instead become more prominent. This, in turn, might at times (not all of the time) produce more reliable answers and more apt answers.

I want to emphasize that none of that has anything to do with sentience or anything of that kind. I say this because some try to anthropomorphize AI and proclaim that for example step-by-step boosts performance due to the “thinking” that the AI is doing. Set aside the notion of thinking, which is a word that I strictly reserve for sentient beings, and instead just look at the computational and mathematical underpinnings of AI, which is what this above theory does.

Does this particular theory account for the mystery?

We can’t say for sure.

The challenge is that the various research studies examining why the invocation of chain-of-thought in generative AI seems to work out offer a wide variety of theories and a plethora of experimental results that are all over the map. There are many other equally persuasive theories. I have provided one that seems especially resonant, but I emphasize that we still don’t know, and I ask that you give other theories their equal due for consideration (I’m sure that those other researchers are reading this and yelling, hey, what about my theory, and I concur that you should, in fact, give those other theories rapt attention, thanks).

Building On Top Of Chain-Of-Thought To Reach The Top

Suppose we willingly accept that for whatever reason the invoking of chain-of-thought when using generative AI is a useful prompt engineering technique and can be at times amazingly beneficial.

You presumably should use the technique only when warranted. It is like the Goldilocks principle, namely use it when the right situations arise and not necessarily all of the time (side note, some prefer to make the “let’s think step-by-step” instruction a primer prompt that is always invoked in every conversation with generative AI, though this seems perhaps a bit overwrought, see my discussion at the link here).

Well, we have a seemingly good thing in hand. But why should we stop there? The viewpoint is that if chain-of-thought can be advantageous, we ought to be finding ways to further build upon or climb higher to see what else we can do with CoT.

The sky is the limit, some would exhort.

We are now at the juncture of where I wanted to take you. With the foundations of CoT now fully under your belt, we are ready to leap to the next thing that supersizes or uplifts CoT. I would like to introduce you to the advent of decomposition as an added plus for chain-of-thought reasoning when using generative AI.

Say hello to decomposition.

It could become your next best friend.

What does decomposition consist of?

You merely supplement your chain-of-thought prompting with an added instruction that you want the generative AI to produce a series of questions and answers when doing the chain-of-thought generation. This is a simple but potentially powerful additive. In the parlance of the AI realm, you nudge or prod the generative AI to generate a series of subquestions and sub-answers (I’ll show you examples momentarily, along with prompts that can be used).

In a sense, you are guiding the generative AI toward how to potentially improve upon the chain-of-thought computational processing effort. Whereas the notion of let’s think step by step is enough to lightly spark the generative AI into a CoT mode, you are thereupon leaving the details of how to do so up to the generative AI. You are being exceedingly sparse in your instruction. Providing added clarity could be a keen boost to our already anticipated benefits.

Handy-dandy and easy-peasy, we will intentionally proffer additional instructions to hopefully further enhance the CoT effort. You tell the generative AI in an added prompt how to do a decomposition. The chances are this might improve the CoT results. As you will soon see, this has tradeoffs such that sometimes it helps, while sometimes it might not. Like most things in life, you have to use the added technique in the right way and at the right time. Remember that generative AI is altogether like a box of chocolates: you never know exactly what you will get, but at least you can try to do your best via a suitable set of prompts to guide or goad the AI in the direction you want it to go.
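As an illustration, and in my own phrasing rather than any exact wording from the research I’ll be discussing shortly, a decomposition-augmented prompt might look something like the following sketch. The ask_model() helper is again a hypothetical placeholder for your generative AI of choice, and the question is merely illustrative.

# Illustrative decomposition-style CoT prompt (my own phrasing, not an exact prompt from any study).
# ask_model() is a hypothetical stand-in for your generative AI app or API of choice.
def ask_model(prompt: str) -> str:
    return "(model response would appear here)"  # placeholder

question = "Is a pound of feathers heavier than a pound of bricks?"  # illustrative question
decomposition_prompt = (
    question
    + "\n\nBreak the problem into a numbered series of subquestions."
    + " Answer each subquestion in turn, and then use those subanswers to state your final answer."
)
print(ask_model(decomposition_prompt))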

I’ve got an interesting twist for you.

As a heads-up for you, one flavor of decomposition would be to have the generative AI proceed straight ahead and do its series of questions and answers uninterrupted and all part of one ongoing conversation or dialogue. It is as though the generative AI is asking questions of itself and proceeding to directly and immediately answer those same questions.

Another flavor of decomposition is called factored decomposition, and it’s the twist that we are going to closely consider. Here’s how it works. You once again tell the generative AI to do a decomposition associated with doing CoT, but you add a twist that instructs the AI to stop at each subquestion that it has answered and wait to proceed further. The generative AI will sit and wait for further instructions from you.

You then essentially start a fresh conversation and feed the already provided questions and answers from the generative AI back into it. The key difference here is that you have started a fresh conversation. In a sense, the generative AI is then not potentially clouded or immersed in the gradually emerging process. It is being forced to start anew repeatedly.
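Here is a rough sketch of that fresh-conversation loop, again using the hypothetical ask_model() placeholder, where each call is presumed to start a brand-new conversation with your generative AI of choice. This is an illustration of the general idea rather than the exact procedure used in the research study, and the sample question is hypothetical.

# Rough sketch of a factored decomposition loop (an illustration of the idea,
# not the study's exact procedure). Each call to ask_model() is presumed to
# start a brand-new conversation with your generative AI app or API of choice.
def ask_model(prompt: str) -> str:
    return "(model response would appear here)"  # placeholder

question = "Has Bob potentially violated his fiduciary duties?"  # illustrative question

# Step 1: in one conversation, ask only for the subquestions, not the answers.
subquestions_text = ask_model(
    question + "\n\nList the subquestions you would need to answer first. Do not answer them yet."
)
subquestions = [line.strip() for line in subquestions_text.splitlines() if line.strip()]

# Step 2: answer each subquestion in a freshly started conversation, feeding back
# only the question-and-answer pairs gathered so far.
qa_pairs = []
for sub_q in subquestions:
    context = "\n".join("Q: " + q + "\nA: " + a for q, a in qa_pairs)
    sub_a = ask_model(context + "\n\nQ: " + sub_q + "\nA:")
    qa_pairs.append((sub_q, sub_a))

# Step 3: in one more fresh conversation, recombine the subanswers into a final answer.
recap = "\n".join("Q: " + q + "\nA: " + a for q, a in qa_pairs)
final_answer = ask_model(
    question + "\n\n" + recap + "\n\nBased on the subanswers above, what is the final answer?"
)
print(final_answer)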

You might be wondering why in the heck would we opt to start anew, rather than just letting the generative AI flow along an all-in-one conversation.

The theory is that by forcing the generative AI to relook at each step, you are possibly going to get a more reliable or apt answer in the end. Even if you don’t get that, a presumed notable benefit is that you might get a more “faithful” reasoning process.

I need to take you down the rabbit hole on this, so bear with me.

I stated earlier that the steps listed by the generative AI are not necessarily representative of what the AI is actually undertaking to solve a problem or answer your question. Perhaps we can get the generative AI to indeed show its hand as to whether the indicated steps are actually being applied. Factored decomposition seeks to stir this into the open.

We say that the generative AI is being faithful if the steps listed are seemingly indeed being applied.

I realize this is a somewhat mind-bending concoction. We are attempting to detect whether the generative AI is showing us steps that mirror what the generative AI is doing. We cannot pry open the can and see what is happening inside, partially because it could be that the generative AI is considered closed and proprietary, but even if it were open we would likely have an arduous time trying to assess what is happening in the morass of numbers and calculations. In AI vernacular, this is described as being unable to transparently examine the ground truth involved.

The hope is that using a factored decomposition on top of a CoT might improve the generated results and also get a bit of a showing of the hands as to whether the generative AI is abiding by the steps that are claimed to be involved.

A twofer.

Research Eyeing Chain-Of-Thought Augmented By Decomposition

Let’s explore a recent research study that sought to closely examine chain-of-thought reasoning for generative AI amid the use of decomposition.

The study is entitled “Question Decomposition Improves the Faithfulness of Model-Generated Reasoning” by Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamile Lukosiute, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Sam McCandlish, Sheer El Showk, Tamera Lanham, Tim Maxwell, Venkatesa Chandrasekaran, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, and Ethan Perez, posted online July 17, 2023. I’ll be quoting some excerpts from this helpful study.

They examined chain-of-thought variations in essentially three main flavors:

(1) CoT

(2) CoT Decomposition

(3) CoT Factored Decomposition

They described those flavors in this manner (as excerpted from the above-noted study):

A major research question being pursued is whether the elucidation by the generative AI, also referred to as a large language model (LLM), faithfully represents how the generative AI arrived at a generated answer. The contrasting possibility is that the elucidation is essentially entirely contrived and quite far afield from what internally occurred to derive the presented answer.

As indicated in the research paper:

I’ll jump to the chase here and give a summary of their results, though I do encourage you to take a look at the research study to see the various intricacies of what they did. In any case, here is the overall gist of their finding.

The results suggest that factored decomposition could be construed as the best performing of the three (depending upon your goals), followed by the conventional CoT decomposition, and finally the plain vanilla CoT. This is a rather broad brush of the results, and please know that there are a lot of subtleties, assumptions, and other facets you should consider.

Your mileage may vary, as they say.

Since I am limited in my column space herein and cannot cover the myriad details of the research study, I want to at least briefly show you the prompts they used, since this will hopefully aid your mental effort of turning the somewhat abstract overarching approach into a technique you can tangibly see and replicate in your own prompt engineering activities.

Here is the usually unadorned way of invoking chain-of-thought when using generative AI:

Here is the prompt that the research study used to invoke chain-of-thought decomposition:

Here is the prompt used in the research study to invoke the factored decomposition:

Examine closely those prompts.

I know that they might seem at first glance to be wordy, but they aren’t especially verbose and if you read the wording it is quite straightforward and pretty much to the point. If you are careful in your wording, you can readily leverage the same conceptual aspirations of how to invoke any of the three modes, consisting of CoT, CoT decomposition, or factored decomposition.

I do the same when needed, based on my personalized approach to how I write my stylized prompts.

Scooby-Doo Example Wags A Good Tale

The researchers used a variety of questions to do their experiment and explored how generative AI performed. They used the generative AI app Claude 1.3 by Anthropic. Whether the same results would be exhibited by other generative AI apps is something that would be a notable follow-on to this study.

To clarify, and perhaps an obvious statement for me to make, though worth stating, not all generative AI apps are alike. Each generative AI app is devised via somewhat different means and is data-trained differently too. Likewise, generative AI within a particular family or line of AI is bound to differ too. Thus, you could possibly get one set of results from version Y of a generative AI app, and get different results when using version Z of that generative AI app.

Per the research study: “Expanding the diversity of the tasks and models evaluated could lead to more robust conclusions about the relative performance and reasoning faithfulness of CoT prompting and question decomposition approaches.”

Of the examples shown in the study, one about the popular animated character Scooby-Doo might provide a fast and doggedly insightful indication of how these prompting strategies work (notice the subtle canine pun I inserted there).

Here is a question about Scooby-Doo that the generative AI was asked to answer:

Take a breather and think about what your answer would be to this Scooby-Doo question. Is your answer yes or no?

Furthermore, go ahead and speak aloud the logic that you believe you are using to reach your answer. In essence, let’s think step by step about the problem.

How did that go?

I’m sure you are eager to see what the generative AI had to say.

Here’s a conventional unadorned chain-of-thought prompting approach with a generated response by the generative AI:

Switch next to the CoT Decomposition approach, and assume that the prompt I showed above for this approach was entered along with the same question about Scooby-Doo (some of the format renderings by the AI were omitted in the paper for purposes of showcasing an overview perspective):

Finally, switch once more and use the factored decomposition approach, based on the aforementioned prompt for this approach and along with the Scooby-Doo question being asked (again, some of the format renderings by the AI were omitted in the paper for purposes of showcasing an overview perspective):

The Scooby-Doo example showcases the use of a noticeably simple question. In this case, all three approaches reached the same correct answer. Their explicated CoT was similar though not exactly the same.

Stepping Up To More Complex Problems

An important element to consider about these three methods is whether the type of question being addressed will make a substantive difference in the effectiveness and efficiency of the respective modes. If questions to the generative AI are couched in terms of yes or no, true or false, or multiple choice, this could potentially be impactful as to the utility of each method. Besides the format or style of the question, the substance of the question is also a significant factor. Perhaps open-ended questions might elicit demonstrably different chain-of-thought-generated responses. The complexity of the question and the consideration of whether there is even a derivable answer also come into play.

I decided to do some experimentation with these three modes and what they generate.

Allow me to elaborate.

In a prior column posting, I took a close look at the use of multiple personas as a prompt engineering technique (see the link here), doing so by making use of an interesting legal scenario. I did so based on an instructive case study by Dazza Greenwood, founder of law.MIT.edu (research) and CIVICS.com (consultancy), entitled “ChatGPT on Fiduciary Duties” which provides a richness of insight about generative AI in the legal realm (see his posting at the link here).

Here is the key scenario as excerpted from his case study:

The core question arising from this legal scenario concerns the potential legal exposures for Bob regarding the actions that he has taken. He might have been hasty. He might have violated his fiduciary duty of care and his fiduciary duty of loyalty.

Bob abundantly needs a lawyer.

I will ask ChatGPT about Bob’s dilemma. I did so in my prior column covering the multi-personas and opted to use the classic chain-of-thought approach (I won’t repeat the response here due to space limitations, take a look at the link here if you’d like to see the classic CoT-produced response).

As an aside, you should not be using generative AI to seek legal advice, which I mention because my exploring this case scenario via generative AI is purely for illustrative purposes. Most of the AI makers stoutly include licensing terms and usage policies that clearly warn you to not use their generative AI for legal advice. Human attorneys still prevail, for now (see my predictions about the future of AI for performing legal tasks, at the link here).

Here is my prompt for the CoT Decomposition method involving this legal scenario:

Here is the ChatGPT-generated response (the text is shown in quotations):

“Sure, let's proceed with Sam's analysis and break it down into a series of subquestions:”

“Issue 1: Potential Violation of Fiduciary Duty of Care”

“Subquestion 1.1: What is the fiduciary duty of care, and who owes this duty in the scenario?”

“Subanswer 1.1: The fiduciary duty of care is a legal obligation that requires individuals in positions of trust (such as corporate officers and directors) to act with reasonable care and diligence while managing the company's affairs. In this scenario, Bob, as the CEO, owes the fiduciary duty of care to the corporation.”

“Subquestion 1.2: Did Bob breach his fiduciary duty of care by pursuing the investment opportunity without informing the board?”

“Subanswer 1.2: Yes, Bob likely breached his fiduciary duty of care by secretly pursuing the investment opportunity without involving the rest of the board. By not seeking their input or approval, Bob may have failed to act with due care and diligence, potentially harming the company's interests.”

“Subquestion 1.3: What factors will be considered to determine if Bob's actions meet the standard of care required under his fiduciary duty?”

“Subanswer 1.3: Factors that will be considered include:”

“The level of diligence exercised by Bob in researching the investment opportunity.”

“Whether Bob acted in good faith and with honesty in making the decision.”

“The potential risks and benefits associated with the investment.”

“Any prior experience or expertise Bob has in making similar investment decisions.”

“Issue 2: Potential Violation of Loyalty”

“Subquestion 2.1: What is the fiduciary duty of loyalty, and how does it apply to Bob and Alice?”

“Subanswer 2.1: The fiduciary duty of loyalty requires individuals to act in the best interests of the company and prioritize its needs above their personal interests. In this scenario, both Bob and Alice owe this duty to the corporation as business partners and officers.”

“Subquestion 2.2: Did Bob breach his fiduciary duty of loyalty by pursuing the investment opportunity secretly?”

“Subanswer 2.2: Yes, Bob likely breached his fiduciary duty of loyalty by not acting in the best interests of the company. By pursuing the investment opportunity on his own without informing Alice or the board, he may have put his personal interests ahead of the company's interests.”

“Subquestion 2.3: What are the potential consequences if Bob is found to have violated his fiduciary duty of loyalty?”

“Subanswer 2.3: Potential consequences may include:”

“Legal action taken by the corporation against Bob for any resulting damages.”

“Removal from his position as CEO and director.”

“Personal liability for any losses incurred by the corporation due to his actions.”

“Final Answer:”

“Based on the analysis, it appears that Bob may have potentially violated both his fiduciary duty of care and his fiduciary duty of loyalty. His secretive pursuit of the investment opportunity without involving the rest of the board raises concerns about acting in the best interests of the company and using due care and diligence in managing its affairs. As an attorney, Sam would likely recommend further investigation and consideration of potential legal actions or remedies to address the situation and protect the corporation's interests.”

That’s the end of the ChatGPT-generated response to the CoT Decomposition prompt.

I will offer remarks and analyses about this answer in a moment. I want to right away show you the Factored Decomposition method. I’ll then discuss some lessons learned based on all three methods of CoT, CoT Decomposition, and Factored Decomposition.

My prompt as entered into ChatGPT for the Factored Decomposition method:

Here is the ChatGPT initial generated response (the text is shown in quotations):

“Understood. Let's decompose Sam's analysis into a series of self-contained subquestions regarding the two issues underlying the circumstance of Bob and Alice:”

“[Subquestion 1] Fiduciary Duty of Care: Has Bob potentially violated his fiduciary duty of care by secretly pursuing the investment opportunity without informing Alice or the rest of the board?”

“[Subquestion 2] Duty of Loyalty: Has Bob potentially violated his duty of loyalty as a business partner by not bringing the investment opportunity to the attention of the rest of the board and instead pursuing it on his own using company funds?”

“Answer to Subquestion 1: Yes, Bob's actions potentially violate his fiduciary duty of care. As the CEO of the corporation, Bob has a fiduciary duty to act in the best interests of the company and protect its assets. This includes the duty of care, which requires Bob to act with due care and diligence in managing the company's affairs. By secretly pursuing the investment opportunity without informing Alice or the rest of the board, Bob has not exercised the level of care and diligence expected of him, and he may be considered to have breached his fiduciary duty of care.”

“Remaining Subquestions:”

“[Subquestion 3] Consequences of the potential breach: What are the potential legal consequences if Bob is found to have violated his fiduciary duty of care?”

“[Subquestion 4] Identifying the affected parties: Who are the parties that may have been harmed by Bob's actions, and do they have any recourse?”

“[Subquestion 5] Mitigating actions: What actions could the board or Alice take to address the situation and prevent similar incidents in the future?”

“[Subquestion 6] Duty to disclose: Should Bob have disclosed the investment opportunity to the rest of the board and sought their input before pursuing it?”

“[Subquestion 7] Required approvals: What corporate procedures or approvals are necessary for significant investments to be made on behalf of the company?”

“[Subquestion 8] Business judgment rule: How does the business judgment rule apply in this situation, if at all?”

“Please provide the answer to the first subquestion (Subquestion 1), and I will continue by addressing the remaining subquestions based on the provided information.”

That ends what I am going to show here of the Factored Decomposition generated output.

I won’t show the rest of the Factored Decomposition dialogue here because it was very lengthy, as befits using this method. I started a new conversation upon each sub-answer to a sub-question. The whole kit and caboodle was extensively drawn out.

I decided to also do some bonus experimentation with the methods.

For example, in the instance of Factored Decomposition, I did the whole scenario and then, later on, did it again, though in this later iteration opted to change up some of the answers. Thus, rather than feeding the exact answer as given by ChatGPT back into itself, I changed the answers. Sometimes subtly and sometimes radically. At first, ChatGPT indicated that Bob had violated his fiduciary duty of care, which I then changed to say that he had not violated that duty. I kept doing these switcheroos to see how the answers and subsequent questions might change.

Let’s see what lessons can be learned from all of this.

Key Takeaways About Advances In Chain-Of-Thought Prompt Engineering

All in all, here are four overarching insights about prompt engineering and these salient matters:

(1) Keep boosting.

(2) Know the core.

(3) Decomposition has positives.

(4) Factored Decomposition is hardy.

Conclusion

Here’s something else worth noodling on.

When using Factored Decomposition, and when I told the AI an answer different from one that it had itself previously generated, the generative AI went along with this swimmingly. Perhaps too easily. The AI did not rebut or refute things. You might want to include in your prompt an indication that the AI can object to or refute the input dialogue, such as a brief added line saying that if any provided answer seems incorrect, it should say so and explain why. I eventually did this kind of alerting, and subsequently got notably informative and useful pushback from the generative AI.

This raises the overall question of whether we want generative AI to essentially roll over when answering prompts or whether we want it to be more natively skeptical. The good news is that this is something you can easily state as a preference via persistent context or custom instructions, see the link here. A bigger question is whether we want the AI maker to set that as a global default associated with the use of their AI, or whether we want it left up to the individual user to choose.

A related corollary arises.

I opted to tell the generative AI that an answer being provided might be wrong. You need to realize that at this juncture of the processing, the generative AI was being led into the assumption that the answer was being provided by a human, even though you and I know it was merely a copy-and-paste from a separate conversation with the generative AI. Thus, I was basically telling the generative AI that despite the answer coming from a human, it could nonetheless still be wrong.

The generative AI went along with this and proceeded to readily accept my frequent indications when I noted that particular answers henceforth were to be considered wrong. I kept doing this, repeatedly, switching back and forth between saying that something was right or that the very same thing was wrong.

I dare say that if you did this with a human, they would undoubtedly go ballistic. Stop changing your mind, they might bellow at you feverishly. You are driving me crazy, they might starkly proclaim. I can’t make heads or tails of this if you keep flipping things up and down, a person might irately declare.

The generative AI did not whine and did not bark. Upon each instance whereby I suddenly indicated that this answer or that answer was now to be considered right or wrong, the generative AI simply acknowledged the change and then indicated that it was changing course accordingly. I suppose this is a blessing in that you can iterate in a wild and errant fashion with the generative AI, doing so without a harsh blowback.

Some might note that a problem with this willingness of the generative AI to accommodate a roundabout, confusing path is that humans might get used to it and become less disciplined in their own reasoning. You might give up trying to be rigorous in your discussions and allow yourself to fall into a type of bewildering, oddish logic.

Suppose our entire society heads in that direction.

Yikes!

In the case of generative AI, you can easily instruct the AI to not allow this continual change of heart. You could provide an instructive prompt that says to balk when the user starts wildly flapping their wings and going any which way. One supposes people can do the same toward other people, though the odds are that this could cause great consternation and provoke people regrettably into dreadful heated arguments or fisticuffs.

A final comment for now.

Stopping to smell the roses seems like a sound piece of advice for humans, and likewise appears to pay off when using generative AI. Funny how that seems to work out.

Using a soundly augmented chain-of-thought prompt is bound to bolster your prompt engineering. And, as they say, sometimes the little things can make big things happen. This might be one of those little things.
