We’ve already talked a little about the ethical considerations of artificial intelligence applications: they collect data from sources that might not have given permission for the data to be used in that manner. As we’ve already discussed, this comes down to the question of who owns “your” data.
Consider how AI tools are being trained. Have you given permission for your work to be used to train one? Are you being compensated for that use? Underpayment is already a common concern among workers; not being paid at all for your work goes a step further.
We also have to consider that many general purpose AI tools can easily be used to replace humans and their work.
As this is an educational environment, many people are talking about AI tools like ChatGPT, Jasper, and others being used by students to write papers and even take tests (see “University students are using AI to write essays. Now what?” from The Register). Clearly there is a legitimate concern: if a student is asked to share what they’ve learned, and all they have learned is to use a tool… what if that tool changes, or is regulated, or doesn’t have information about a new topic? What if the writing isn’t as clear, or as personal? Rather, it might be overly… dare I say… mechanical?
Is this an example of people not using the tools available to them, or a cause for real concern? I am old enough to remember teachers not wanting you to have a calculator because you wouldn’t have one with you every day of your life (hello, smart phone). I also remember people being concerned that spell check would make it so people wouldn’t learn to spell (this is mostly true – especially if you read posts on social media). I even remember people railing against computers, because you should learn to type on a typewriter, the way it was meant to be learned.
So are the people who don’t use AI just modern-day Luddites? Or is there something to this?
Consider the following situations:
- Imagine going to a lawyer to draw up a case because you were injured on the job, but they only used an AI tool to do it instead of actually doing the work. Should they be paid the same? Would you feel cheated? What if the information is wrong? (See “US judge orders lawyers to sign AI pledge, warning chatbots ‘make stuff up’” from Reuters, and “NYC judge scolds attorney for submitting a brief filled with unintelligible legal jargon created using ChatGPT bot, while lawyer blames the robot claiming it duped him” from ChatGPT Global News.)
- Imagine going to the doctor and they use an AI tool to diagnose you. (See “ChatGPT outscores med students on complex clinical exam questions” from medicalxpress.com.)
- Imagine you are asked to write an article based upon your experience with topic ___________. Someone else offers to do it for less pay. You find out later that they just used an AI tool to do it.
- What would your opinion be if your professors only created lesson plans and notes based on AI tools? Would you feel cheated out of your education?
- Would you want a professor who used AI to write their papers, and who therefore might not actually know the topics they are supposed to be teaching you? (While AI tools aren’t old enough for this to have happened yet, give it about two years, if that, and I can promise there will be some like that.)
Each of these questions is an example of something that is currently being looked at or done.
What if something is wrong?
Another question is how someone who hasn’t learned a topic yet can know whether they are learning it correctly. I, as a seasoned professional, can type something into a search engine or an AI tool and call BS on a bad answer. Someone who is still learning a topic may not realize they are being fed nonsense.
Who is liable if the AI tool makes a mistake? Who is to be believed, the AI tool or the human? How do you check?
Let me give you an example, based on a real-life situation, where there were two humans involved. I was working for a company, and one day a tool that I helped administer started generating an error code. My coworker and I both got a help desk ticket about it. She looked up the error code on Google and immediately gave out the answer that Google provided. I immediately ordered our team to ignore that answer and not implement the fix under any circumstances.
Now you might think that is crazy, undermining my coworker like that. However, Google’s first result, which I had also seen, involved a different version of the software, and implementing it would have forced our company to roll back the version we were using, losing necessary features in the process.
I found the right fix about 10 minutes later. It wasn’t the first Google result. It wasn’t even on the first page if you searched based only upon the error message. I had to research the answers to find the right one. AI tools are not, at this point, error-checking themselves – much as my coworker didn’t, in her rush to be first with an answer.
So how do you utilize a general purpose AI tool? Should you? And what role should a government play in regulating this type of technology? Remember, different countries might have different rules, and that could drive companies out of one country to develop products in a country that is more permissive of their technology.
The Ethics of a General Purpose AI was originally found on Access 2 Learn