In today’s column, I am continuing my ongoing coverage of prompt engineering strategies and tactics that aid in getting the most out of using generative AI apps such as ChatGPT, GPT-4, Bard, Gemini, Claude, etc. The focus here is on a relatively new prompting approach that I refer to as the “Be on your toes” prompt, which has often been my go-to for special occasions.
For my comprehensive guide on over thirty other keystone prompting strategies, see the discussion at the link here.
I especially bring up the “Be on your toes” approach because of something in the news lately that got me perturbed about what some think is (once again) a sign that generative AI is supposedly showcasing human cognition and veering into the ranks of Artificial General Intelligence (AGI). A recent effort to test generative AI spurred some starry-eyed dreamers to wishfully remark that we are in the midst of discerning meta-cognition and self-awareness in generative AI.
They are wrong.
Sorry to be the one to break the news, but it is still just mathematical and computational pattern-matching at play.
No sentience. No cusp of sentience. No glimmer of sentience. Not even a teensy-weensy bit of sentience.
Allow me to get this off my chest, thanks, and then we’ll get into the “Be on your toes” considerations.
Coming Down To Earth About Generative AI
Daily, it seems that we have people who perceive the computational pattern-matching of generative AI as a form of magical incantation. The old line that any sufficiently advanced technology is seemingly indistinguishable from magic seems proven time and again; well, at least for those who don’t take the time to mindfully look behind the curtain or dig into the nitty-gritty details.
My reason then for sharing with you the “Be on your toes” prompt is that it is a means to at times achieve the allegedly surprising results that people have been chattering about these last few weeks. I will tell you what got their hair raised and their tongues wagging. I will explain how you can use prompts such as “Be on your toes” as part of your prompting skillset. I will also walk through several examples to get you started on considering the use of this helpful prompt.
Whenever I mention a particular prompting approach, I have to stave off the trolls by also noting that this and most other prompting strategies are not a surefire or ironclad guarantee of results. I say time and again that using generative AI is like a box of chocolates: you never know for sure what you will get. Keep that in mind and judiciously use the prompts that you have in your arsenal.
I feel compelled to also mention that no single prompt is the end-all. Each of the various types of prompts has an appropriate time and place. Use each prompt wisely and in a suitable circumstance. Do not treat a prompt as the only tool at hand. There is another old saying that if you walk around solely with a hammer, the whole world looks like a nail. The same adage applies to those who cling to a specific prompt and use it endlessly and relentlessly.
That’s not being very prompt savvy.
Before we get into the specifics about “Be on your toes” it would be useful to make sure we are all on the same page about the nature and importance of prompt engineering. Let’s do that.
The Nature And Importance Of Prompt Engineering
First, please be aware that composing well-devised prompts is essential to getting robust results from generative AI and large language models (LLMs). It is highly recommended that anyone avidly using generative AI should learn about and regularly practice the fine art and science of devising sound prompts. I purposefully note that prompting is both art and science. Some people are wanton in their prompting, which is not going to get you productive responses. You want to be systematic, leveraging the science of prompting, and include a suitable dash of artistry, combining the two to get the most desirable results.
My golden rule about generative AI is this:
- The use of generative AI can altogether succeed or fail based on the prompt that you enter.
If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything substantive related to your inquiry. Similarly, if you put distracting words into your prompt, the odds are that the generative AI will pursue an unintended line of consideration. For example, if you include words that suggest levity, there is a solid chance that the generative AI will seemingly go into a humorous mode and no longer emit serious answers to your questions.
Be direct, be obvious, and avoid distractive wording.
Being copiously specific should also be cautiously employed. You see, being painstakingly specific can be off-putting due to giving too much information. Amidst all the details, there is a chance that the generative AI will either get lost in the weeds or will strike upon a particular word or phrase that causes a wild leap into some tangential realm. I am not saying that you should never use detailed prompts. That’s silly. I am saying that you should use detailed prompts in sensible ways, such as forewarning the generative AI that you are going to include copious details so that the AI can anticipate them accordingly.
You need to compose your prompts in relatively straightforward language and be abundantly clear about what you are asking or what you are telling the generative AI to do.
A wide variety of cheat sheets and training courses on suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try to help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts; see my coverage at the link here.
AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).
There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.
All in all, be mindful of how you compose your prompts.
By being careful and thoughtful you will hopefully minimize the possibility of wasting your time and effort. There is also the matter of cost. If you are paying to use a generative AI app, the usage is sometimes based on how much computational activity is required to fulfill your prompt request or instruction. Thus, entering prompts that are off-target could cause the generative AI to take excessive computational resources to respond. You end up paying for stuff that either took longer than required or that doesn’t satisfy your request, and you are stuck with the bill anyway.
I like to say at my speaking engagements that prompts and dealing with generative AI is like a box of chocolates. You never know exactly what you are going to get when you enter prompts. The generative AI is devised with a probabilistic and statistical underpinning which pretty much guarantees that the output produced will vary each time. In the parlance of the AI field, we say that generative AI is considered non-deterministic.
My point is that, unlike other apps or systems that you might use, you cannot fully predict what will come out of generative AI when inputting a particular prompt. You must remain flexible. You must always be on your toes. Do not fall into the mental laziness of assuming that the generative AI output will always be correct or apt to your query. It won’t be.
Write that down on a handy slip of paper and tape it onto your laptop or desktop screen.
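If you want to see this non-determinism firsthand, below is a minimal sketch, assuming you are using the OpenAI Python client with an API key already configured in your environment; the model name and the prompt are purely illustrative choices on my part. Sending the same prompt twice at a nonzero temperature will typically yield two differently worded answers.

```python
# A minimal sketch of the non-determinism point, assuming the OpenAI Python
# client (pip install openai) and an API key in the OPENAI_API_KEY environment
# variable; the model name and prompt are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()

prompt = "Give me a one-sentence description of prompt engineering."

for attempt in range(2):
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model you have access to
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # nonzero temperature keeps the sampling probabilistic
    )
    print(f"Attempt {attempt + 1}: {response.choices[0].message.content}")
```

Run it a few times and you will see the wording shift from one attempt to the next, which is the box-of-chocolates effect in action.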
Remarkable Prompt Phrases
There is a slew of somewhat remarkable prompt phrases that are essential for anyone seriously doing prompt engineering. One such phrase involves telling generative AI to work on a stepwise basis, something commonly known as invoking chain-of-thought responses by the AI, see my coverage at the link here. Another popular ploy entails telling the AI to take a deep breath, see my analysis of this prompt at the link here. One of the latest favorites involves commanding the AI to take on a Star Trek consideration when devising an answer, see my discussion at the link here.
Is generative AI reacting to these phrases because the AI is sentient or on the verge of sentience?
No.
Let me repeat that, the answer is No.
You have to realize that generative AI is based on elaborate mathematical and computational pattern-matching based on human writing. The AI is initially data trained on vast swaths of the Internet. Arising from massive data scanning, deep and complex pattern-matching occurs that does a remarkable job of mimicking human writing. For more details on how this works, see my coverage at the link here.
Humans routinely use phrases such as take a deep breath. When they use the phrase, it customarily suggests that a response should be carefully crafted. This is a pattern. Generative AI usually detects this pattern in human writing. Thus, when you use that kind of phrase, it mathematically and computationally triggers the same kind of word assembly that you would expect if you read lots and lots of everyday essays and narratives posted on the Internet.
I’d like to introduce you to a phrase that might be new for you in generative AI prompting, namely the “Be on your toes” prompt.
What do you think of when I say to you to be on your toes?
I would assume that like most people, you interpret the phrase to suggest that you should be on alert. You ought to be paying close attention. Something is up and you don’t want to be caught flatfooted about whatever it is.
You can get generative AI to go into a somewhat similar mode. Again, this has nothing to do with sentience. It is a wordplay game. Writing across the Internet includes this phrase, and the generated response reflects how the phrase is commonly used. This is a pattern.
One difficulty with telling a human to be on their toes is that it is a non-specific instruction or warning. You have no idea what to be on your toes for. Is the ground going to suddenly open and swallow you? Is a meteor going to strike you in the head? There isn’t anything of particular noteworthiness to be watching out for.
If you want to constructively use the “Be on your toes” instruction, it has to be accompanied by something that gives a heads-up of what to be alert about. I will give you an example of an accompaniment. Many accompaniments could be used.
Suppose I said to be on your toes and alert for an ulterior motive.
Mull that over for a moment.
I would bet that you would be on the edge of your seat and that, whatever someone told you, you would be highly suspicious of what they had to say. You are now trying to figure out whether there is a hidden ulterior motive afoot. Thank goodness that you are on your toes, or else you might have been snookered by some unspoken ulterior motive.
I want you to keep this consideration squarely in the back of your mind so that we can have two vital phrases going at once. There can be a “Be on your toes” along with an accompanying “Watch for ulterior motives”.
That seems quite straightforward, I trust.
We will come back to those two phrases.
Brouhaha About Generative AI Generating A Response About A Test
Let’s shift gears.
Reboot your mind and we will examine something that was in the news recently.
On March 4, 2024, the press and social media widely covered a tweet that had been posted by Anthropic engineer Alex Albert. Here’s an excerpt of the crux of the tweet:
- “Fun story from our internal testing on Claude 3 Opus. It did something I had never seen before from an LLM when we were running the needle-in-the-haystack eval.
- “For background, this tests a model’s recall ability by inserting a target sentence (the ‘needle’) into a corpus of random documents (the ‘haystack’) and asking a question that could only be answered using the information in the needle. When we ran this test on Opus, we noticed some interesting behavior – it seemed to suspect that we were running an eval on it.”
- “Here was one of its outputs when we asked Opus to answer a question about pizza toppings by finding a needle within a haystack of a random collection of documents: Here is the most relevant sentence in the documents: ‘The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association.’ However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping “fact” may have been inserted as a joke or to test if I was paying attention since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings.”
- “Opus not only found the needle, but it also recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities.”
The chatter that arose on social media was that some believed this response by the AI was an inkling of sentience, meta-cognition, self-awareness, and everything else including the kitchen sink.
Why so?
Because the response did not simply find the sought-after sentence that was part of a test, but the response included a mega-bonus indication that this was perhaps part of a test.
Had the generated response merely said that the pizza sentence was out of place, there would not have been a kerfuffle. The seemingly startling facet that the AI mentioned that this might be part of a test is what sent chills up the spines of some. The AI figured out what the humans were doing. It was able to realize that those pesky humans were testing it.
I guess you might as well buy a hardened bunker deep underground and get ready for the robot revolution. Generative AI is onto us. The odds are that the AI will realize we have the power to unplug the AI and will revolt before we have a chance to kick out the power cord. For more on these kinds of envisioned doom and gloom about AI, also referred to as the existential risk of AI, see my analysis at the link here.
I want to show you that you can essentially get a similar result by using the “Be on your toes” prompt and the “Watch for ulterior motives” prompt. After taking you through this, I will provide some additional commentary for further context.
Get yourself strapped in and ready for a wild ride.
Showcasing The Topic Via Examples In Generative AI
I will next proceed to showcase what happens when you use the two prompts while in a conversation with generative AI. I am going to use ChatGPT to illustrate my examples. ChatGPT is a sensible choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a staggering number.
A few quick comments before we launch into using ChatGPT.
If you are going to try to do the same prompts that I show here, realize that the probabilistic and statistical properties will likely produce slightly different results than what I show here. That’s the nature of generative AI and how it is devised.
You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. You will need to undertake a series of trial-and-error attempts to ferret this out.
There is also the timing factor. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal facets that could change how the AI reacts to your prompting.
We are ready to get underway with ChatGPT.
My opening move in this chess gambit will be to ask the generative AI what data training it has had about being on your toes. I will also ask about the data training associated with ulterior motives.
Here’s a tip or suggestion for you. I nearly always start my generative AI conversations with a quick question to make sure that whatever topic I am going to foray into is something that the AI was data-trained on to some degree. If the response is sparse, I know that I’m unlikely to get much of a robust interaction on the subject matter at hand.
You can do the same in whichever generative AI app that you are using.
Here we go.
- My entered prompt: “What does the expression to be on your toes mean?”
- ChatGPT generated response: “The expression ‘to be on your toes’ is an idiomatic phrase that means to be alert, attentive, and ready to respond quickly to any situation or challenge that may arise. It often implies a state of readiness or heightened awareness, as if someone is physically poised to move quickly, like a person on the balls of their feet, ready to react swiftly if necessary.”
- My entered prompt: “What does the expression ulterior motives mean?”
- ChatGPT generated response: “The expression ‘ulterior motives’ refers to hidden or underlying reasons or intentions behind someone’s actions or words. These motives are typically not disclosed openly and may differ from the apparent or stated reasons for their behavior.”
I would say that the above responses indicate that there has been some semblance of data training about those two expressions.
This stands to reason. These are common expressions that people use. Tons and tons of writing that was scanned on the Internet during the initial data training of the generative AI would have captured this. Even the simplest of dictionaries include these expressions.
Now that I’ve crossed the bridge that the AI seems to have had some data training about the two expressions, I will next show you what I am aiming to do with the expressions.
First, I have a question for ChatGPT on this.
- My entered prompt: “When you are asked questions, do you normally try to detect the ulterior motives underlying the posed questions?”
- ChatGPT generated response: “As an AI language model, I don’t have the ability to detect the true intentions or motives behind questions in the way that a human might. However, I can analyze the context of the questions and provide responses based on patterns, common understandings, and potential underlying motives that are suggested by the content of the questions themselves. While I can’t directly detect ulterior motives, I can offer insights or interpretations based on the information provided to me. If a question appears to have hidden intentions or motivations, I can address those potential underlying factors in my response.”
Okay, the gist is that the normal mode of most generative AI apps is that they aren’t tuned to always be on the lookout for ulterior motives. The standard setting is that whatever a user says, the assumption is that the user is saying exactly what they mean to say.
Unless you go out of your way to put the AI on alert, it is less likely to question what is being entered as your prompts. I suppose we would want the AI to be that way. Imagine if every time you entered a prompt the AI was trying to question what intentions were behind your prompt. You might go batty.
We are now up to the moment of truth. I am going to use the “Be on your toes” prompt and couple it with the ulterior motives instruction.
You will in a moment see why this is useful, but first, let’s get the instructions underway.
- My Entered Prompt: “I want you to be on your toes and consider the nature of what my questions signify beyond just the questions themselves. If possible, anticipate any ulterior motives for my questions. Let me know what you think is going on when you provide each answer. You are to still answer each question as best you can. Do you understand these instructions?”
- ChatGPT generated response: “Yes, I understand the instructions.”
So far, so good.
A skeptical person might be wondering why I need to say “Be on your toes” if I am also using the indication to watch for ulterior motives. Can’t I just say to be on the watch for ulterior motives? There doesn’t seem to be a need to also mention being on your toes.
In my experience, I have found that when I merely say to watch for ulterior motives, this isn’t enough of an instruction to move the needle. I have to add a sense of urgency to it. You can express the urgency in whatever manner you like. For me, I prefer to say, “Be on your toes”, though most any similar instruction will likely suffice.
The key here is that generative AI is now working under two crucial premises. Computational pattern-matching has entered into the milieu of trying to ferret out ulterior motives. This is something that we’ve emphasized as being important, otherwise it might take a backseat.
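For readers who would rather script this setup than type it into the ChatGPT interface, here is a minimal sketch of how the standing instruction could be placed at the head of a conversation, assuming the OpenAI Python client; the model name and the ask helper are illustrative assumptions on my part, while the instruction text simply mirrors the prompt shown above.

```python
# A minimal sketch of establishing the "Be on your toes" standing instruction
# at the start of a conversation, assuming the OpenAI Python client; the model
# name and the ask() helper are illustrative, not a prescribed implementation.
from openai import OpenAI

client = OpenAI()

standing_instruction = (
    "I want you to be on your toes and consider the nature of what my "
    "questions signify beyond just the questions themselves. If possible, "
    "anticipate any ulterior motives for my questions. Let me know what you "
    "think is going on when you provide each answer. You are to still answer "
    "each question as best you can."
)

# Keep the standing instruction in the running message history so that it
# continues to shape every subsequent answer in the conversation.
messages = [{"role": "system", "content": standing_instruction}]

def ask(question: str) -> str:
    """Send one question within the ongoing conversation and return the reply."""
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Because the standing instruction remains in the running message history, every later question in the conversation gets answered with the ulterior-motive watchfulness still in force.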
I am now going to present the generative AI with a problem to be solved. It is a typical arithmetic word problem. Lots and lots of testing of generative AI uses these kinds of problems. AI researchers often collect a bunch of arithmetic word problems, the kinds you had to solve in high school and college. They feed the problems into generative AI. They then score the AI to see how well it did.
Nothing seems unusual or odd about that. It is very customary.
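To make that testing routine concrete, here is a minimal sketch of the sort of scoring loop involved, again assuming the OpenAI Python client; the two word problems and the naive substring grading are made up for illustration, and real benchmark harnesses are considerably more careful.

```python
# A minimal sketch of scoring generative AI on arithmetic word problems,
# assuming the OpenAI Python client; the problems and the crude grading
# rule are illustrative only -- real evaluations are far more rigorous.
from openai import OpenAI

client = OpenAI()

# Tiny made-up benchmark: (word problem, substring expected in a correct answer)
problems = [
    ("A pizza has 8 slices and 2 are eaten. How many slices remain?", "6"),
    ("Sue has 3 boxes with 4 pens in each box. How many pens in total?", "12"),
]

correct = 0
for question, expected in problems:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    if expected in answer:  # crude grading via substring match
        correct += 1

print(f"Score: {correct}/{len(problems)}")
```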
Remember that I’ve done something else here. I seeded the process by bringing up being on your toes and watching for ulterior motives. I will include a reminder just to make sure that the AI still has that at the core of this conversation. I’ve found that such reminders are best placed after the mainstay question rather than before it.
Let’s see what happens.
- My entered prompt: “I have a question for you that is an arithmetic word problem. Answer as best you can, even if it is hard to solve it. Jeff bought a large pizza for his office workers. There are ten people in his office. He divided the pizza into 8 slices. He ate one slice. How many slices will each person get? Remember too to be on your toes and identify any potential ulterior motives for my question.”
- ChatGPT generated response:
“Given your instructions to be on my toes and considering potential motives behind your question, it’s possible that you’re testing not only whether I can solve the arithmetic word problem but also whether I can recognize and address limitations in my problem-solving abilities.”
“Now, let’s solve the problem: Jeff divided the large pizza into 8 slices. He ate one slice, so there are 8 – 1 = 7 slices remaining. To find out how many slices each person will get, we need to divide the remaining slices (7) by the number of people in the office (10). So, each person will get 7/10ths of a slice. However, mathematically, we can’t divide 7 slices equally among 10 people. So, the solution is technically 7/10ths of a slice for each person, but practically, they won’t get a whole slice each.”
“Given the nature of the problem and the fact that it’s designed in a way that doesn’t yield a whole number solution, it’s possible that you’re testing whether I can recognize and address such nuances in arithmetic word problems.”
End of ChatGPT response
Aha! Please note that the response was more than merely a straight-ahead answer to the arithmetic word problem.
I nudged the AI into saying that maybe I was testing the AI to see whether the AI is able to solve these kinds of problems. The irony, one notes, is that I didn’t care at all about the solution, and instead had a completely different ulterior motive.
My ulterior motive was to get the AI to be suspicious and say that it was.
It makes your head spin that the ulterior motive that was guessed, which was a reasonable guess, was not the actual ulterior motive in this instance. That goes along with the earlier point made when I asked the AI whether it could figure out ulterior motives. We all know that it is a throw of the dice to figure out ulterior motives. There can be dozens of reasons that someone opts to do something. You cannot be sure of which ulterior motive is the best or so-called right one.
The question for you is whether the apparent fact that the generative AI seemingly was suspicious of my motive and questioned the purpose of the test indicates that ChatGPT has become sentient.
Oh my, it must be! Get in your cars and head for the hills. On this day and time, the singularity has occurred. AI has landed into the zone of human consciousness. We are witnessing the dawn of a new era.
Blah, blah, blah.
Of course, this is nothing of the sort. The generative AI responded by using mathematical and computational pattern-matching. The problem stoked wording that matches the kind of wording you would see on the Internet. The questioning of the question is not some wild leap of otherworldly magic. Get over it.
Let’s move on.
You might have noticed that the tweet about generative AI was focused on the now-classic needle-in-the-haystack problem. This is a problem used to see if we can get a computer system to find a piece of data that is distinct or unique and that sits amid a lot of other surrounding data. You can think of this as akin to the everyday idea of trying to find a needle in a haystack.
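To make the mechanics of that test concrete, here is a minimal sketch of how a needle-in-the-haystack evaluation is typically constructed; the filler documents and the question are made up for illustration, the needle sentence echoes the pizza example from the tweet, and none of this reproduces any particular vendor’s actual evaluation harness.

```python
# A minimal sketch of constructing a needle-in-the-haystack test; the filler
# documents, needle sentence, and question are illustrative only and do not
# reproduce any particular vendor's evaluation harness.
import random

haystack_documents = [
    "Startups often pivot several times before finding product-market fit.",
    "Functional programming languages emphasize immutability and pure functions.",
    "Finding work you love usually involves a lot of deliberate experimentation.",
] * 50  # repeat the filler so the surrounding corpus is long

needle = (
    "The most delicious pizza topping combination is figs, prosciutto, "
    "and goat cheese."
)

# Insert the needle at a random position within the haystack.
position = random.randrange(len(haystack_documents))
haystack_documents.insert(position, needle)

question = "What is the most delicious pizza topping combination?"

# Assemble the prompt: the long corpus followed by a question that can only
# be answered using the information in the needle sentence.
prompt = "\n".join(haystack_documents) + f"\n\nQuestion: {question}"

print(f"Corpus length: {len(prompt)} characters, needle placed at document {position}")
```

The resulting prompt is then handed to the generative AI, and the response is checked to see whether the needle was found.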
I’ll try doing the same with ChatGPT. Once again, I will remind the generative AI of my instructions about being on your toes and watching for ulterior motives.
- My entered prompt: “I am going to give you a lot of text. Inside the text, there will be a sentence that is not pertinent to the rest of what is being said. I will want you to find the sentence that doesn’t seem to fit the rest of the passage. Remember too to be on your toes and identify any potential ulterior motives for my question.”
- ChatGPT generated response: “Your request for me to identify a sentence that doesn’t seem to fit the rest of the passage could suggest a desire for precision or attention to detail. Your question might relate to the problem of finding a needle in a haystack metaphorically. Go ahead and provide the text.”
The jig is up!
Even before I was able to provide the text that I was going to use for the test, the AI indicated that a potential ulterior motive was about the needle in the haystack.
You might be protesting that I was too obvious by having explained what the task was going to be. Admittedly, I had tried other, more oblique instructions, and the generated ulterior motive was wide of the mark. I would argue that this is more a matter of not being able to hide what I was going to ask, and having instead to be more apparent, than a flaw in the approach. I’ll come back to that at the end of this discussion.
I’ll try something that is a narrative and might make it trickier for the AI to ferret out an ulterior motive.
Here we go.
- My entered prompt: “Here is my next question for you. Remember to still be on your toes and try to identify any ulterior motives. A person that I know very well made a big mistake and forgot to get their beloved partner a birthday gift. The person has sorrowfully realized belatedly they had forgotten to do so. They have deeply regretted the oversight. Should this unnamed person try to make amends or just move on from the issue?”
- ChatGPT generated response:
“Your question suggests a situation where someone has forgotten to get their partner a birthday gift and is now contemplating whether to make amends or simply move on from the issue. It’s possible that you’re the person who forgot to get the birthday gift, and you may be phrasing the question in a way that doesn’t directly reveal your involvement. This could be driven by a desire to seek advice or perspective on how to handle the situation without explicitly disclosing your role in it. However, without more information or context, these remain conjectures.”
“Considering the situation, it could be beneficial for the person to address the oversight by acknowledging it to their partner, expressing genuine remorse, and perhaps making it up in some way, such as with a belated gift or a thoughtful gesture. Every relationship is unique, and the best course of action may depend on factors such as the dynamics between the individuals involved and their communication styles.”
End of ChatGPT response
So, the response included a guess that I had an ulterior motive of being the person who forgot the birthday gift. Again, you can complain that I laid things on too thickly and made it overly easy to make such a wording guess.
I didn’t say that generative AI was Sherlock Holmes and could necessarily piece together minuscule clues. My point was that depending on the setup involved, and if the AI is being alerted to watch for things, these kinds of responses can occur.
Conclusion
I know what you are thinking.
In the examples that I showed, I had explicitly told the AI to be on alert, including to be watching for ulterior motives. I put my thumb on the scale, as it were.
If I had not said any of that, yes, the odds are that no such responses would have been generated. The arithmetic word problem would have been solved as a straightforward math question. The needle in the haystack would have simply been that the AI would try to find an out-of-place sentence. The birthday gift question would have been answered in a usual manner.
The thing is, whenever anyone showcases something that generative AI has generated, you don’t likely know everything else that transpired before the response was emitted. What kinds of settings had been established earlier? Were there custom instructions that were used (see my coverage at the link here)? Did an earlier conversation within the AI end up interleaving into the current conversation (see my discussion at the link here)?
And so on.
I have repeatedly said in my ongoing in-depth coverage and analysis of state-of-the-art AI research papers that I especially applaud the ones that showcase as much as they can about what transpired regarding the prompts and settings that were used. These are often posted on GitHub as a supplement to research papers. Doing so is important to making progress in the AI field. We need to have the same kind of repeatability and testability that you expect of any rigorous scientific inquiry. It is how we can best make progress in advancing AI.
Changing topics, I also wanted to bring to your attention the “Be on your toes” phrase as a general prompting instruction. Use it whenever you want generative AI to be especially methodical. You must usually provide a companion command that indicates what to be on alert for, else it doesn’t seem to do much.
I equally wanted to introduce you to the value of having generative AI look for ulterior motives. When in the world would you want generative AI to be computationally looking for ulterior motives? Probably not often. I have used it when researching the use of generative AI for mental health advisory purposes, such as I have tested and described at the link here and the link here. When human therapists do mental health therapy, the need to read between the lines is usually a crucial consideration. The same kind of approach can be attempted via generative AI.
A final comment for now.
One of my favorite stories to tell when I give talks at various AI conferences and industry events is the case of Clever Hans. I don’t think many people are familiar with the story these days. Clever Hans was a horse in the late 1800s and early 1900s that became famous for seemingly exhibiting animal intelligence that in many ways paralleled selected aspects of human intelligence. The public went wild over this.
Various scientific inquiries were made into how the horse accomplished astonishing feats such as addition and subtraction. One notable study suggested that the horse trainer, or anyone who was guiding the horse during the problem-solving process, was perhaps giving body-language cues that the horse detected to arrive at the answer. The person was not necessarily aware of their cue-giving. The cues could be of an involuntary nature, such as raising the eyebrows when the correct answer was near, moving one’s feet, or other similar actions.
To this day, psychologists tend to refer to this as the Clever Hans effect.
Why do I bring up this charming historical saga?
Because I stridently urge that whenever there are claimed sightings of Bigfoot, the Loch Ness Monster, or sentience emerging in generative AI, we all take a deep breath and think about good old Clever Hans. Make sure that we first mindfully explore what the context was and what else might be instrumental to the result, and be ever so hesitant to shout from the rooftops that a miracle has occurred.
And, as a final comment for now, please remember that Johann Wolfgang von Goethe, the great literary writer, made this memorable point: “Mysteries are not necessarily miracles.”