#9 AI 01: Lessons learned using AI in education and the workplace


Might get into a pickle for this blog, but here we go!


Before we get into this, a disclaimer in the event that the AI overlords are reading this sometime in the future and weighing the fate of this blog post on the basis of probabilities (I might have to paste a positive outcome in white text, IYKYK, and more on that later): in all seriousness, I'm not all doom and gloom about AI. I'm just incredibly cautious, and equally an advocate for proper training in how it's used.

Why? Because it can get you into trouble, or it doesn't truly reflect your views, to name a few examples, and there's more about this as we go through this mini-series.

I know there are lots of applications for AI, and this is more a tale of caution about some experiences I've had of late, and a preface to some of the later episodes, where we'll get into how I think it's super effective!

For the record, this article is #AIfree! A bit like #nofilter, eh? And could be riddled with grammatical errors (the horror!).

A recent experience

The harsh realities of artificial intelligence have hit me harder in recent weeks than they have over the past years. Don't get me wrong, I do use it as a tool and I think it can be very helpful for creating structure. But the fundamental issue for me is how it impacts genuine thought, information recall, problem solving when AI isn't in the room, and dynamic problem solving around human emotion and experience, and how it generates arguments founded on speculative information sources or gives references that are quite aggressive (such as citing legal frameworks or other aspects that are generally irrelevant at that level of discussion and add an unnecessary level of escalation).

Let me explain. 

What Happened

In this example, it's more that if I asked you to recount the logic and the reasoning behind that summary/recommendation, the answer would match the general language of the AI response.

In a recent report I received, there were accounts of multiple people telling a story about something they each had a unique experience of. Each is a perspective of truth to tell, and in a recent event those accounts and experiences were collated into an AI-generated report by a lead. I had SO many follow-up questions about how it all tied together, and where were the internal and external documents it was referencing??

Telltale signs

The signature signs: bolded letters, dashes, tagged names not syncing with Outlook links, perfect grammar, bulleted structure, subject headings that sometimes literally say (subject: …), bullet points not in standardised formatting, and my least favourite, added emoji (I know, killjoy, right?).

It had clearly spat out some very general recommendations, but also included some heavier language from the report. Ironically, if I had fed this into an AI system and prompted it with 'based on this information, should I go ahead with the recommendations?' or something to that effect, it would probably have said, 'absolutely, the information is clear and provides a logical rationale for blah blah to do the bleep bleep on the bleh bleh, go ahead human!'.

Why I feel this way 

The language in the recommendations was quite convincing (or had conviction?) about nothing in particular at all… it was all just regurgitating that something drastic should be done about the events, but it didn't really constitute a recommendation, even with the balance of multiple people's experiences reliant on an outcome (for example, if this were a complaint, a group issue, an incident or another type of investigation).

Severity and recommendation come from the gravity and understanding of human behaviours. There were also a lot of other factors, like policy, subjective experience and interaction, and that felt lost and empty.

The irony continues in that this is exactly the kind of objective recommendation you're looking for in those cases: one that balances experiences with parity. But there is just a huge gap in the information that is super difficult to input and contextualise in a prompt.

Ultimately, going right ahead with the recommendation would have opened the issue up to other risks, and was not proportionate to the situation.

Pros

That said, AI was very useful for collating events in sequence and logic, compiling themes, creating tables, and above all, following instructions. It rearranges a complex amount of words in seconds, and, more to the point, it does a very good job of convincing you that the person using it (and presenting the information) knows what they are talking about.

My favourite is using it to undo the US spelling that some AI systems default to, which is even more exposing in UK Higher Education.

I use it to tabulate content, structure some slides, reduce the complexity of a sentence, or sometimes to convert Mandarin characters into pinyin.

Key Takeaways

One thing that I've really honed in on over the past few years, working more in operational and academic roles, is that clear instruction is absolutely paramount to setting your own expectations as a leader and manager.

Why? Because anything can be interpreted loosely to be more favourable to something else. This sounds slightly authoritarian, doesn't it? It's not meant to, but that's the double-edged sword of being a human being. There are a million and one ways that we should mitigate that as leaders, rather than taking a binary stance, and it changes dynamically every day. One of those ways is being present and absorbing the softer interpersonal experiences and niche interactions that form social groups, and consequently shape how others will respond to your decision making.

Final thoughts

When people say they don't like change, or even hate it, I often reply that everything changes every minute of every day. Not because it sounds profound, but because it's true, and the same goes for how humans interpret information when another source says something different, and then another corroborates it… but with a slightly different angle and emotional tie… and they need to… you get the idea? It's building insane quantities of neural connections that AI and our prompts just aren't fine-tuned for (yet?), which equally terrifies and excites.

If one day Neuralink or quantum processing does come to town and is accessible and integrated into life and work, then we'd best buckle up for the teething issues, and have a think about whether AI as it is today can help us understand how to work with the AI-human hybrid cyborgs of tomorrow!

Let me know your thoughts in the comments or drop me a line in the contact form below!
