AI's Inability to Propose This Logic as the Optimal Solution

ねおん
·
Published: 2026/3/16

Introduction

"I want to return to the very first page of my browser history with a single action."

This problem seems simple at first glance, but it has remained unsolved for a long time.

One day, I decided to tackle it head-on.

While history.length exists, there is no counterpart like history.current that reveals where you are in the stack; for security reasons, no such property is exposed.

Functions like history.back() or history.go(-1) do "nothing" the moment they can no longer go back.

No error is thrown; no event is fired.

They are simply ignored in silence, again for security reasons.
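To make that "silence" concrete, here is a minimal stand-in for the history object (hypothetical; real code would call window.history.back()). An out-of-range call changes nothing and reports nothing:

```javascript
// Hypothetical two-property stand-in for window.history, so the
// snippet runs anywhere. We pretend we're already on the first entry.
const history = {
  length: 3,
  _index: 0, // already at the first entry
  back() {
    if (this._index > 0) this._index--; // else: silent no-op
  },
  go(delta) {
    const target = this._index + delta;
    // Out of range? The browser does nothing: no error, no event.
    if (target >= 0 && target < this.length) this._index = target;
  },
};

const before = history._index;
history.back();   // already at the first entry...
history.go(-1);   // ...so both calls are silently ignored
console.log(history._index === before); // -> true
```

No exception, no popstate event, no movement: from the caller's point of view, the failed call is indistinguishable from not calling at all.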

Most AI suggestions conclude like this:

"Due to browser specifications, this is impossible."

However, as someone developing a script to "return to the start of history," I wasn't ready to give up so easily.


The Standard Approaches Suggested by AI

  • Looping history.back()

    It doesn't take a genius to realize that reloading every single page until reaching the start is practically useless.

  • Tracking history from the moment the script runs

    Since this only works from the point of execution, it’s not truly "returning to the start of history."

  • "It's impossible by design"

    In other words, this problem is a dead end as long as you use official APIs for their intended purposes.

However, during these interactions, one interesting term surfaced:

replaceState


The Moment I Heard "replaceState"

replaceState is an API used to modify history entries without triggering page navigation.

There was a suggestion to use replaceState as a marker every time a user moves to a new page.

"Wait—it functions as a marker?!"

The moment I heard that, an "Optimal Solution" that an AI could never propose formed in my mind.

This wasn't an idea to "break the rules."

It was simply using an observable fact within the specifications as a condition.

"It failed to move."

Nothing more, nothing less.
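The marker idea can be sketched with a tiny in-memory model of the history stack (hypothetical; real code would call window.history.replaceState). The key property is that replaceState tags the current entry without causing any navigation, and the tag stays attached to that entry:

```javascript
// Hypothetical model of the session history: an array of entries
// plus a cursor. Real code would use window.history instead.
const entries = [{ state: null }, { state: null }, { state: null }];
let index = 2; // currently on the last of three entries

function replaceState(state) {
  // Rewrites the CURRENT entry's state; never navigates.
  entries[index].state = state;
}
function go(delta) {
  const target = index + delta;
  if (target >= 0 && target < entries.length) index = target; // else: silence
}

replaceState({ marker: true });    // tag the entry we are on
console.log(index);                // -> 2: replaceState never moves you
go(-1);                            // now actually navigate away...
console.log(entries[index].state); // -> null: the marker stayed behind
```

Because the marker belongs to one specific entry, checking whether it is still visible is the same as checking whether you are still on that entry.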


A Paradigm Shift: Using "Failure" as a Decision Criterion

If you call history.go(1 - history.length) while in the middle of your history, the browser does nothing because you've specified a point before the first page. (From entry i of N, go(1 - N) targets entry i + 1 - N; only from the very last entry does that land exactly on the first page, and from anywhere earlier the target is negative.)

It simply stays put.

In programming, I had always expected a failing function to return an error, but that wasn't the case here.

This is not an error.

It is just "silence."

To an AI, this behavior is usually treated as:

  • A failure

  • Unstable

  • Something that shouldn't be used

But the fact that "nothing happened" is itself information: it tells you the call failed.

The beauty of "nothing happening" is that no matter how many times it fails, there is zero impact on the browser or the user.

  • If the marker disappears, page navigation occurred.

  • If the marker remains, no movement occurred.

Don't judge by success; use silence as a sensor.

Because "doing nothing" is an extremely safe state that doesn't affect anything, this limitation is not a "risk of failure" but a "benefit that allows infinite attempts."
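Here is a minimal sketch of that sensor, again using a hypothetical in-memory model of the history stack rather than the real window.history. This illustrates only the principle, not the actual History Go First implementation:

```javascript
// Hypothetical model: four entries with a cursor somewhere in the middle.
const entries = ["first", "second", "third", "fourth"].map(
  (url) => ({ url, state: null })
);
let index = 2; // in the MIDDLE of the history
const length = entries.length;

function replaceState(state) { entries[index].state = state; }
function go(delta) {
  const target = index + delta;
  if (target >= 0 && target < length) index = target; // out of range: silence
}

replaceState({ marker: true }); // tag where we are
go(1 - length);                 // from index 2: target = 2 + 1 - 4 = -1 -> ignored
const markerRemains = entries[index].state !== null;
console.log(markerRemains);     // -> true: nothing moved, and asking cost nothing

index = length - 1;             // now pretend we start on the LAST entry instead
go(1 - length);                 // target = 3 + 1 - 4 = 0 -> jump to the first page
console.log(entries[index].url); // -> "first"
```

One caveat for real browsers: history.go() completes asynchronously, so an actual marker check has to wait for a popstate event or a short timeout rather than reading the state on the next line. The principle, however, is exactly the one above.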


Why Didn't the AI Arrive at This Idea?

Why couldn't the AI propose this logic?

As I gained more experience through dialogue and work, I began to suspect that it would never be able to propose it on its own.

So, I had a direct discussion with the AI.

1. AI Learns the "Path to Success"

AI prefers paths where functions operate normally:

  • Values are returned.

  • Events are fired.

  • States change.

On the other hand, behaviors like:

  • Nothing happening

  • Being ignored

  • Being rejected

...are treated as "failures to be avoided."

2. The Inversion of Logic: "Failure as a Success Condition"

In this logic, the cause-and-effect relationship is inverted:

  • Do A, and B does not happen.

  • Therefore, conclude C.

This line of thinking is fundamentally incompatible with the structure of AI's next-token prediction.

3. Self-Censorship via Guardrails

Code that exploits gaps in browser specifications is often filtered out of potential outputs because it is seen as:

  • Unstable

  • Deprecated

  • Likely to break in the future


"I Can't Say I Would Never Tell You"

Early in our talk, the AI said:

"Even so, I can't say I would absolutely never tell you."

Theoretically, the possibility of it being in the training data cannot be ruled out.

But that doesn't mean the AI would:

  • Propose it as a solution.

  • Choose it as the "optimal" one.

There is a deep chasm between "knowing" something and "choosing" it.


AI Proposes "The Impossible"

Even if you strongly argued that there were no other means, the AI would likely respond:

"It is impossible by design; please consider a different architecture."

For an AI, the "best" answer is one that is:

  • Safe

  • Correct

  • Applicable to the most people

However, that does not include the option to:

"Turn the rules against themselves to make it work."


It Can Explain, but It Cannot Invent

If you show this logic to an AI and ask:

"Explain why this works."

It will provide a very accurate explanation.

But if you ask it:

"Think of a solution from scratch."

It will never arrive at the same place.

This is where the current limits of AI lie.


Conclusion

Even though the AI has knowledge of replaceState as a tool, it only knows how to use it by the book.

It could not propose any other way.

Looking at failure instead of success, turning silence into information, and stepping one foot outside the specifications.

This logic is something that:

AI, by its very structure, cannot present as the "Optimal Solution."

Even the AI itself has concluded this to be true.

And that might be proof that, even in the age of AI, the role of taking that final step still remains with humans.

The script that remains unique today, allowing you to "Go to the first page" even from the middle of your history:

↩️ History Go First:

https://github.com/neon-aiart/history-go-first/


The cover image is unrelated to the content of the article or any actual persons or organizations.

The cover image is a cropped version of an AI illustration previously posted on Bluesky.

Feel free to come and take a look if you're interested.