Securely Building AI That Builds AI


Artificial intelligence, at whatever level you understand it, is intellectually interesting, but it remains philosophically (and at times ethically) uncertain.

I have a psychology background, and when I was researching this topic, all I could think about was the parallel between the human brain and an artificial neural network (then I discovered why: neural networks are modelled on the brain!).

[Image: the various uses and applications of AI in business today]

In psychology, the brain is often referred to as a "black box", meaning we have no definitive idea of what goes on inside it. Sure, we can run fMRIs and EEGs on the brain itself, but at the end of the day we are inferring, based on years of research and studies.

Progressing this metaphor, if we think about behaviour and how we as humans learn, at a very basic level, we refer to Pavlov's dogs in classical conditioning or the concept of positive reinforcement and negative punishment in operant conditioning.

[Image: Pavlov's dogs and classical conditioning]

When we as humans learn new behaviours, we have the above framework influencing our decisions to conduct those behaviours until we no longer have to think about them (to a certain extent, of course). Take driving a car, for example. When we were teenagers with our learner's permit, the act of driving a car was an intense learning experience. However, after progressing through the levels of those licences, this behaviour becomes something we might call "second nature".

What progresses this intense experience to second nature?

It is in the push and pull of the consequences our behaviour produces. In the learning to drive example, a positive reinforcement might be passing the driving test to be able to drive unsupervised. On the flip side, a negative punishment might be the loss of that licence due to unlawful driving behaviour.
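To make this push and pull a little more concrete, here is a minimal sketch in Python of how a learning system might strengthen or weaken a behaviour based on the consequences it produces. The behaviour, the reward values and the learning rate are all invented for illustration; it captures the same basic idea that drives reinforcement learning, not any particular system.

```python
# A minimal sketch of consequence-driven learning (illustrative values only).
# The system keeps a "value" for a behaviour; rewards pull it up, punishments pull it down.

learning_rate = 0.1
behaviour_value = 0.0  # how strongly "drive lawfully" is currently preferred

def update(value, consequence):
    """Nudge the behaviour's value towards the consequence it just produced."""
    return value + learning_rate * (consequence - value)

# Positive reinforcement: passing the driving test and driving unsupervised (+1.0)
behaviour_value = update(behaviour_value, +1.0)

# Negative punishment: losing the licence after unlawful driving (-1.0)
behaviour_value = update(behaviour_value, -1.0)

# Repeated consequences shape the value, and with it the behaviour that gets chosen.
print(round(behaviour_value, 3))
```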

Cognitively speaking, we don't really know what happens in the brain to encode the good and the bad and thereby shape our behaviour, but we do know that it happens through this kind of operant conditioning.

I am sure you have come across this concept on numerous occasions, so I won't delve any further into examples. The important thing here is to draw the similarity between what we input into systems (organic or man-made) and what we get out of them, without really knowing what happens in the "in-between".

Artificial intelligence is the idea that we as humans can program a computer or device to conduct behaviours based on a stimulus (Britannica, 2022). However, given what I presented at the beginning of this article, this overarching definition might be too broad, as it hasn't touched on the complexity involved with neural networks.

Re-introducing the concept of neural networks and how they relate to AI, we need to consider that AI is not a singular school of thought; there are multiple levels to acknowledge.

The three levels of AI, as depicted by Codebots, are:

  1. Artificial Narrow Intelligence
  2. Artificial General Intelligence
  3. Artificial Super Intelligence

[Image: the three levels of AI]

Deep learning sits in level 2 (hey, look, a brain image; can't think why that might be!). Deep learning is actually developed with the brain in mind, which is why I chose a psychological metaphor for this article.

If it is not graphically represented in an organic way, its complexity can instead be demonstrated through a network of data points.

[Image: an infinitely wide neural network]

Looking at this graph, we can see there are a lot of fluctuations and patterns, all within a set of parameters. For any math geeks or stats enthusiasts out there, this doesn't look too hectic, in the sense that it is contained and somewhat predictable. However, its predictability comes from the parameters being set and the outputs being idealised.

It is interesting that, with this 'artificial' intelligence, we still want (and need) to be in control of the outputs.

Deep learning is still in a relatively primitive state; it is quite immature and learning constantly. The (ideal) optimal state of operation would be for neural networks to produce results as efficiently as the human brain. Sure, we have computers and networks that are able to produce results in milliseconds; Google's search engine is a perfect example of this. What a neural network will do on top of that is continue to learn from every input, search process and consequent output, to start building a system that operates within itself.
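As a rough sketch of what "learning from every input" can look like underneath, here is a single artificial neuron that nudges its weights after every example it sees. This is a hedged, minimal illustration (online gradient descent on a squared error), not a description of how Google's systems actually work; the data and learning rate are invented.

```python
# A single neuron that keeps adjusting itself after every input/output pair it sees.
# Online learning: there is no final "trained" state, only continual refinement.

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.05

def predict(inputs):
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def learn(inputs, target):
    """Shift the weights a little in whatever direction reduces this example's error."""
    global bias
    error = predict(inputs) - target
    for i, x in enumerate(inputs):
        weights[i] -= learning_rate * error * x
    bias -= learning_rate * error

# Every new observation nudges the system; it keeps building on its own previous state.
for inputs, target in [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]:
    learn(inputs, target)
```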

Operates within itself

When I was reading more and more about deep learning and neural networks, I was struggling to grasp the ethical issues, or governing issues, that could arise with this level of artificial intelligence.

Red flags everywhere!

Essentially, we are creating the foundations for a system to build its own bigger system, and we are launching this into the wild.

My red flags:

Bias

[Image: the equation neural networks operate with]
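The equation appeared in the original article only as an image; a standard form of the single-neuron equation most likely shown there is (a reconstruction, not the original graphic):

    y = φ(w₁x₁ + w₂x₂ + … + wₙxₙ + b)

where the xᵢ are the inputs, the wᵢ are the learned weights, b is the bias weight, and φ is the activation function.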

This is the equation with which neural networks operate. Notice the bias weight? This could be anything: gender, racial, intellectual, etc. My issue is this: if artificial intelligence is, ideally, meant to remove the human factor (in which this bias is, more often than not, present), why are we adding a bias weight to the equation at all?

Development of efficiencies

Earlier I mentioned that in learning behaviours we, as humans, develop efficiencies. One of the potential dilemmas that might arise from this is that neural networks may (or will) learn new ways of performing tasks. This might sound like a natural progression, and in some cases this will be the very reason we use systems like this.

The counterargument arises when there are specific methods for achieving an output that are in place for an array of reasons. One of these reasons could be as blatant as legal restrictions. Sure, there are parameters we can input into these systems to restrict the network to producing an output that is legal (in this case). The issue lies in the neural network working out how to work around these parameters to produce the output.
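To illustrate what "working around the parameters" can look like, here is a deliberately toy sketch. The penalty below only covers the one shortcut we thought of, so a score-maximising learner is free to find another; the routes and numbers are entirely made up.

```python
# Toy example: a constraint that covers the letter of a rule, not its intent.
# Each route to the output has a raw score; only one known-illegal route is penalised.

routes = {
    "intended_method": 1.0,      # the method we want the system to use
    "known_shortcut": 1.5,       # the workaround we anticipated and penalised
    "unforeseen_shortcut": 1.4,  # a workaround nobody anticipated, so never penalised
}

def score(route):
    penalty = 10.0 if route == "known_shortcut" else 0.0
    return routes[route] - penalty

# The optimiser dutifully avoids the penalised route... by taking a different shortcut.
print(max(routes, key=score))  # -> "unforeseen_shortcut"
```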

This is quite a philosophical argument, and there are many opinion pieces available to the curious public. Fair warning though: it starts to sound like a chicken-and-egg situation very quickly!

I like to think about it with an example, and because we have used the car example previously, I will stay on that avenue.

When you get in the car, you pop your seatbelt on and then start driving. Whether you are a "seatbelt on, then ignition" kind of person or the other way around, securing the seatbelt before actively driving the car is the process we follow as safe individuals.

So, what if you combined those actions and put your seatbelt on whilst driving? Sure, it would be more efficient, but who's to say you are protected from an unfortunate event, such as another car colliding with yours before your seatbelt is on and you are secured?

This is when the intended method should not be worked around.

Switching it off

My final red flag didn't come from research; it came from the great conversations I have with my peers and colleagues. One of our major concerns arising from this advancement is the ability (or lack thereof) to switch it off.

Logically speaking, you should be able to "pull the plug" on anything electronic. However, a self-teaching neural network might be able to teach itself to exist in another space (like a virus) and reactivate itself elsewhere. Scary stuff…

This article isn't meant to scare anyone into switching off their devices and getting off the grid; this technology is many years away (maybe it won't even arrive in our lifetimes). The positive to take from this is that there is so much discussion, and so many theories and hypotheses constantly de-risking and presenting cyber-secure frameworks, that when this technological advancement launches, we will be ready!


ABOUT THE AUTHOR

Alice Spies

KPI motivator and resident head chef
