Problem with debates

We are naturally inclined to admire and vote for the quick thinkers, those who will give us black/white answers on the spot. Our debate champions are those who will immediately blurt out an answer, supporting it with a seemingly logical argument while often ignoring the real complexity of the problem. The person who sounds the most convincing is the winner.

Can you spot the issue there?
Imagine that your car breaks down and needs fixing. You take it to the shop, where two interns and an audience of thirty people, mostly unfamiliar with cars' inner workings, will decide its fate based solely on who is more convincing. Both interns sound convincing, and both are wrong, but the audience picks one. As it happens, that choice is not good enough to fix your car.

As the example shows, this is logically flawed: objective truth does not depend on our ability to build a strong case "for" or "against" something.

Debates are also prone to staircase wit (l'esprit de l'escalier): the clever comeback that occurs to you only when it is too late. But when we are solving problems in science, in social and political affairs, and in almost anything else in our lives, late comebacks should not be useless; they should be put to use.

What we ideally want is continuous research with new findings, and many attempts and failures, until we discover the objective truth or the solution that best fits the purpose in most cases.

New economic systems, new forms of governance, new energy infrastructure, solutions to global warming ... these are all very complex problems, and yet I have seen many shallow debates about them, without any real data to support the arguments on either side.

Often we reject ideas without even running pilot programs. Simply put, we are not trying enough. Society does not invest enough in attempts: it is obsessed with success and profit, and at the same time afraid of failure. We rarely try new things even just for the sake of evaluation, and most of the time we simply maintain the status quo.

It is easy to envisage that, in the near future, A.I. will be capable of winning every single debate; able to search the totality of human knowledge, it will be very hard to compete with.

What then? When all of its findings are drawn from human knowledge, should we defer to the A.I.'s suggestions as the best choice every single time? And should we give it control over our lives?

Ideally, the machine will lay out the many facets of a problem, and we will then have to weigh the pros, the cons, and the inconclusive data, deciding whether to do more research or to take appropriate action.

Big problems have their own underlying complexity; therefore, good answers and solutions always require time and lots of research and lateral thinking. Rarely are the answers binary 1/0 or black/white, and there is no silver bullet.

Underlying relations, the interdependence of objects, combinations of elements... these make the sequential nature of debate (speech in a broad sense) a very poor tool for finding solutions to hard problems.

In the future, we will probably need to invent something better: a different way of deciding.
