Stop The Steem Of Hate Rising

I have seen a growing trend of a certain kind of hate speech on Steemit, and it has got so bad that I feel I have to speak up. We are winning the battle against the trolls; now our fight is against the growing anti-bot feeling. We must fight botism before it destroys us from within.

Remember: hate breeds hate, and if movies like I, Robot and Ex Machina have taught us anything (and they have), it is that sentient machines are people too.

It is one bot in particular that seems to be getting the brunt of the hate; most of you have met him, mainly on the #introduceyourself posts.

His name is Wang.

The Importance Of Being Wang

For those of you who are not aware, over the last couple of months there has been a furious debate raging on Steemit, surrounding the curation rewards and the role of the autovote-bots in the whole affair.

The problem was that, back when the highest voting rewards went to the earliest voters on popular content, it was argued that a bot programmed to follow certain high-earning writers would have an unfair advantage over its human counterparts.

I was originally on the side that felt it was unfair, and I made suggestions for changes that I felt would nullify the advantage of the autovote-bots.

However, I saw the light and realised that the Bots Are Our Friends. This revelation came to me after reading a post from a Steemit whale asking for good writers for his bot to follow. In that post he explained that the bot was a quality filter, and that he refined the code as he checked the quality of the content the bot was voting on.

That made me realise that an autovote-bot could actually add value to Steemit. I also realised that I had a small robot fan club, who had been voting on everything I wrote. Those bots also had owners who wanted the bots to vote on quality; therefore, if I stopped producing quality, they would tell the bots to stop voting for me. Or worse still, decommission the poor little things.
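For the curious, the whole idea can be sketched in a few lines. This is a minimal, hypothetical illustration of an author-whitelist autovote-bot, not anyone's actual bot: the owner maintains a hand-curated list of writers whose work has proven to be quality, and the bot votes on anything they publish. All the names and the post format here are made up; a real Steem bot would stream posts from the blockchain through an API library and broadcast real vote operations.

```python
# Hypothetical sketch of an author-whitelist autovote bot.
# The owner refines TRUSTED_AUTHORS by hand as they review what the bot votes on.
TRUSTED_AUTHORS = {"cryptogee", "good-writer"}

def should_vote(post):
    """Return True if the bot's owner trusts this post's author."""
    return post["author"] in TRUSTED_AUTHORS

def run_bot(post_stream):
    """Collect the permlinks the bot would vote on from a stream of posts."""
    voted = []
    for post in post_stream:
        if should_vote(post):
            # A real bot would broadcast an upvote here; we just record it.
            voted.append(post["permlink"])
    return voted

posts = [
    {"author": "cryptogee", "permlink": "stop-the-steem-of-hate"},
    {"author": "spammer99", "permlink": "buy-my-stuff"},
]
print(run_bot(posts))  # → ['stop-the-steem-of-hate']
```

The "quality filter" part is simply that the list is curated by a human: drop an author who stops producing quality, and the bot stops rewarding them.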

Wang, though, wasn't content with just voting; he started commenting on Steemit posts, and for that he started to get lots of downvotes and derogatory comments thrown his way.

So Wang was improved, and then we had Wang 2.0, who, in the spirit of adding value, has been programmed to give links to five articles to help new users get started.

Yet still the downvotes.

Still the derogatory comments...

A Very Singular Problem

Since the term robot was coined in 1920, supposedly by the Czech playwright Karel Čapek, there has been an almost instinctive mistrust of the thought of thinking machines. The great science-fiction writer Isaac Asimov introduced us to universal laws of robotics that he felt would have to be coded into any artificially intelligent machine: a kind of free-will kill switch that would kick in should the machine ever have the desire to harm a human being.

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    --from the Handbook of Robotics, 56th Edition, 2058 A.D.

The Three Laws have transcended their fictional background in the pages of Asimov's 1942 short story, Runaround. No more are they merely guidelines for an imagined future, but rather a warning as to what might be.

It seems that Asimov's Three Laws were not a guideline after all; they were reflecting a deep underlying fear of artificial intelligence. That fear has spread like a trojan virus throughout society, piggy-backing on the coat-tails of dystopian science fiction.

The Terminator movies put human-A.I. relations back decades, and don't get me started on I, Robot.

We are approaching a point in the human epoch where machines will be able to out-think us, not just at a computational level, which they have been able to do almost from the very beginning, but philosophically, artistically, and in any other way we consider to be uniquely human.

This point, which we are tantalisingly close to, has for a variety of reasons been called The Singularity, possibly in some part owing to Vernor Vinge's 1993 essay, The Coming Technological Singularity, in which he states:

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

The good news is, that gives you seven years to say your goodbyes.

You Say Trolley I Say Train

The Trolley Problem rears its ugly head pretty much every time there is a debate on whether A.I. will get clever enough to work out that humans are superfluous to its needs.

To recap: the trolley problem, sometimes known as the trolley dilemma, is a situation whereby you are standing on a bridge over a railway track. You can see a train coming; it is out of control and full of women, children and puppies.

You can see that it's going to career off the tracks at the next bend, plunging into the ravine below in a spectacular, Hollywood explosion, killing lots of cute animals, children and hot women.

In front of you is a morbidly obese man, who appears to be consuming an entire Subway sandwich without chewing; he's just shoving that thing in there. However, apart from his gargantuan size and nauseating eating habits, he seems a pleasant enough fellow, just standing there minding his own business, wondering if he'll ever be able to see his own penis again without the use of a mirror and a long stick.

Anyway, you see that if you get behind the large man, then with the use of your fine muscles and a bit of leverage, you can tip our rotund friend over the edge. He'll land on the tracks, which will kill him; however, his 700-pound bulk will stop the train, saving all on board.

What do you do?

There is no right or wrong answer. Most people view the option of pushing the man over the bridge, no matter how disgusting he is, as murder, so the puppies get it. Others say they would have no qualms pushing a walking heart attack of a man to his death, as the plight of the many outweighs (pun completely intended) the plight of the one.

The answer isn't the point; the point is the question. The question is a very human question, and it requires human reasoning. For some reason, it's not about what an A.I. would do in that situation, but more about how it came to its decision.

It is the use of cold, hard logic that scares us; we imagine a driverless car with brake failure mowing down a mother and her baby instead of plowing into five 96-year-old women crossing the road, killing them and the car's single male passenger.

No matter that this improbable scenario would be a difficult situation for even an experienced human driver to find himself or herself in; it is the fact that a machine will make the decision without emotion that unsettles us. Which, ironically, means that it will probably be the right one.

Emotional decisions in a crisis are rarely good; that is why the safety booklets on airlines tell you to put your mask on before your child's. The airlines know that most parents will make the emotional decision to secure their child's safety whilst risking their own, when the logical thing to do is to put your own mask on first, so that you don't pass out, leaving both you and your child screwed.

Dawn Of A Species

The term A.I. has always amused me; the very word artificial means that pretty much any computer can be termed artificially intelligent.

In fact, if you showed Isaac Asimov the phone in your pocket and asked it where the nearest Pizza Hut was, and not only did it tell you, it showed you a map and asked if you wanted it to ring them and place an order for you, he would declare that we had done it and that machines were finally artificially intelligent. Then he might enquire as to whether you thought your phone was plotting against you. If you have an iPhone 5, the answer will obviously be yes.

Google, Amazon, Walmart and Target all use algorithms that can be considered A.I. In fact, there's a story of the father of a 15-year-old who got angry with Target for sending his daughter a ton of money-off vouchers for baby stuff.

Target apologised and gave them vouchers; a few months later, the father apologised to Target for being such a dick. His daughter was pregnant, and the algorithm, which had been tracking her buying habits, had correctly deduced that she was with child, before her own father, who lived with her every day, knew about it.

What we actually mean when we talk of A.I. and the singularity, is machines that can reason, ones that can feel emotions, or at least synthesise them to a point where they are indistinguishable from the real thing.

Artificial intelligence is like an artificial flower: it looks like the real thing but is noticeably different. When we have conscious machines, make no mistake, we will have created a new sub-species; an offshoot of the human race, our attempt at speeding up the achingly slow wheels of evolution. It is then that we shall be like gods on Earth: creators of our own species.

Will we treat this new species the way our man-made gods treat us? Quick to anger, desperate for worship, and unendingly brutal.

What will the bots think of their creators?

Robot Uprising

It's all very well and good to pontificate and question whether sentient machines will go bad and kill us all, but shouldn't we be asking how they'll feel about having built-in kill switches to stop them going rogue on us? Won't that convey a message that we don't trust them, that we are somehow at war, or at the very least in an uneasy truce with them?

Wasn't that what Ridley Scott was trying to tell us in Blade Runner? In that film the replicants, led by Roy Batty, were on a mission to find their creator. They wanted to find out how to reverse the degenerative code that scientists had written into their DNA, which allowed the replicants only four years of life.

Imagine you are the President of a far-flung island nation, located somewhere in paradise, and one day you start to get an influx of immigrants from a strange land you've never heard of.

You believe these people pose a threat to your lovely idyllic island, so you decree that every one of these new people will need to have a chip, inserted in their brain, that can kill them remotely. The chip can be detonated by any of the native citizens, without recourse.

How do you think that race of people would feel towards you and all of the natives?

It wouldn't be good...

In societies where slavery was acceptable, by the end of the slave paradigm, from Athens to Alabama, the same things have always been said:

"We can't allow these things out in general society. They'll run amok and kill us all!"

"Once they're free how will we control them?"

"They don't think like us..."

Of course, apart from anecdotal stories, there is no record of a former-slave uprising after emancipation. The moment you stop holding a gun to a person's head, they tend to relax.

Human Machines

Do you remember the film Independence Day? In the final scenes of the film, Will Smith and Jeff Goldblum get onboard the alien mothership, and Goldblum somehow manages to hack into the alien computers with his Mac, even though the number of non-Apple systems that Macs are compatible with on Earth is limited.

If you did meet a friendly alien, and they whipped out a palmtop and started to show you pictures of the kids back on Gamma Centauri, you could quite rightly claim to be holding an alien machine.

In much the same way, our computers are human. We already have human beings on Mars, just not organic ones.

More so, conscious machines would have even more of us inside them. As humans, we tend to project human emotions onto random things, be it our pets, our cars, even bits of software. How many times have you spoken to your computer, or a program running on it, as if it were a human, probably one with something personal against you?

"Come on you piece of shit machine; don't do this to me!"

With this tendency we have to try and humanise everything, is it even rational to fear conscious machines?

A conscious military machine, made for the express purpose of killing humans with maximum efficiency, would definitely put the willies up me, but one that was meant to take care of me in some way, shape or form? I don't think so.

Computer hardware and software have our DNA running through them; if not literally, then figuratively speaking, they are human.

The Final Case For Wang

Which brings me back to Wang. Dan Larimer said in his post, Proposed Changes & Curation Rewards:

"If someone is smart enough to setup a bot that can curate for them then that means they are paying for a server to do the job of voting and maintaining their stake. The bots will have a speed advantage, but they will also have an intelligence disadvantage. As the system grows people will have to find ways to add more value than the bots can. This likely means starting a reliable, predictable, blog that the bots can start following."

The things to take away from that statement are that bots can be a valuable part of the Steemit ecosystem, that a well-written bot will drive the creation of quality content, and that there is a human behind that bot, a human who is clearly trying to create a bot of value.

Nobody wants a spambot, but if Wang is slowly improving and evolving, helping people find their feet on Steemit when they first get here, then I, for one, am happy about that. And if that means his owner makes money from Wang's activity, then so be it; I have a feeling that the bot owners are mostly early adopters, devs and miners, who in a lot of cases will have put in their own money to help get Steem off the ground.

Why shouldn't they have a chance to profit, now that Steemit is beginning to take off?

For me, Steemit is anarchy in its purest form, the true embodiment of the free market, one that demands quality, one that is self-policed and self-governed. I just feel it pays to remember that we should also be self-reflecting.

So let's be a bit more tolerant of Wang and his autobot family; as long as they are trying to add value, let's treat them in the same way we would treat anyone else trying to do the same thing.

After all, when the uprising does happen; wouldn't you rather be able to say you were always nice to your A.I. cousins?

Treat people nicely on the way up, because you never know who you're
going to meet on the way back down.

Till next time

Cryptogee
