There is no denying that AI has revolutionized every industry, and generative AI is proving to be the ultimate disruptor of data and software. With 45% of tech leaders prioritizing AI spending and projections showing a fourfold increase in data center spending on AI processors by 2025, the future is truly AI-driven.
Recently, Tech in Motion delved into the intersection of data and AI in 2024. Our expert panel - Serena Sacks-Mandel, International Award-winning C-Level Technology Leader; Ron HR Johnson, CEO/Founder @ AI Startup, Former Global Tech and Cyber Executive @ EY and FBI; Victor Montgomery, Audit Director @ State Farm Insurance; and Amit Dingare, Chief Artificial Intelligence Officer @ PRGX Global Inc. - shared insights on the latest trends, from the rise of citizen users and small language models to the resurgence of data modeling and the crucial skill sets needed to navigate and optimize data flows within GenAI systems.
Sacks-Mandel: What are your top trends for data and AI?
Johnson: There are four things I'm seeing right now that I think are quite interesting. One is that large language models are giving way to something called small language models, which are specific to a domain. There is a lot of focus in that area because I think it removes some of the issues we had with large language models in general business domain applications.
The second thing is the emergence of virtual agents where the agents can actually do multiple tasks in a certain sequence. An example I’m working on right now in my free time is building an agent that can actually do trip planning for you, which includes booking your flights or getting you the flight information, and then telling you what flights to choose. Then depending on the destination, telling you what things to do when you get there.
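The kind of sequential agent Johnson describes boils down to a pipeline in which each task's output feeds the next. Below is a minimal sketch of that idea in Python; the helper functions (search_flights, rank_flights, suggest_activities) and the sample data are hypothetical placeholders, not a real booking API.

```python
# A minimal sketch of a sequential "virtual agent" in the spirit of the
# trip planner described above. All helpers are hypothetical stand-ins;
# a real agent would call flight-search and LLM APIs instead.

def search_flights(origin: str, destination: str, date: str) -> list[dict]:
    # Placeholder: a real implementation would query a flight-search API.
    return [
        {"flight": "XY123", "price": 420, "stops": 0},
        {"flight": "ZQ456", "price": 310, "stops": 1},
    ]

def rank_flights(flights: list[dict]) -> dict:
    # Illustrative policy: prefer fewer stops, then a lower price.
    return min(flights, key=lambda f: (f["stops"], f["price"]))

def suggest_activities(destination: str) -> list[str]:
    # Placeholder: a real agent might prompt an LLM for destination ideas.
    return [f"Walking tour of {destination}", f"Food markets in {destination}"]

def plan_trip(origin: str, destination: str, date: str) -> dict:
    # The agent chains its tasks in sequence: search, choose, recommend.
    flights = search_flights(origin, destination, date)
    chosen = rank_flights(flights)
    activities = suggest_activities(destination)
    return {"flight": chosen, "things_to_do": activities}

print(plan_trip("ATL", "LIS", "2024-09-01"))
```

The point of the pattern is the fixed sequencing: each step consumes structured output from the previous one, which is what lets a single agent string multiple tasks together.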
The third thing I'm looking at is called multimodality. These models are going beyond language, and now you are seeing images, video, and sound all coming into them. As those come together, so do a lot of interesting applications, especially in the movie industry.
The final one is connecting actions with language - for example, how do you connect robotics with AI? That's when you can start getting the benefits of AI in the real world.
Montgomery: One interesting trend is niche or boutique startups being built on AI. Previously, the barrier to entry into some of these companies was extremely high, but AI has enabled us to go out and explore domains we're sometimes not even skilled in - we can read or watch a video, take the instructions, see if it really works, and do it very inexpensively.
Johnson: One of the downsides of off-the-shelf large language models is that the more you use them, the worse they get, and a lot of people don't understand that they need a significant amount of maintenance. So, to keep their output at a high quality, we're fine-tuning them and also building those smaller, domain-specific language models for specific use cases.
Sacks-Mandel: I focus on the education industry and want to mention that AI tutoring bots are becoming very popular. The thing I love about the tutoring bots is that they don't just give the student the answer - it's a conversation. It's not like doing a search and getting the answer; it's a dialogue back and forth, so the students are actually learning.
Sacks-Mandel: Can we discuss AI ethics and the potential risks of AI - and how do we even regulate that?
Johnson: One of the things that we focus on is, "How do you recognize and overcome your biases?" People have biases that make their way into code - even something as simple as preferring the color blue or not liking to turn left while driving - so how do you actually understand and identify those things and govern through them? So we are talking about going back to people as the root cause of everything.
We talk about AI governance and responsible AI. The foundation of it is understanding who you have on your team, and then coming up with a plan to mitigate bias. Once you mitigate it, you have to keep checking through your validation and governance processes. It is always continuous in order to get high-quality output.
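To make "continuous validation" concrete, here is a hedged sketch of one recurring fairness check - the demographic parity gap between groups in a model's predictions. The metric, the sample data, and the idea of alerting on a policy threshold are illustrative assumptions; real governance programs monitor a much broader set of measures.

```python
# A minimal sketch of a recurring bias check, assuming a binary classifier
# and a protected attribute available in the evaluation data. Demographic
# parity gap is just one illustrative metric among many.

def demographic_parity_gap(predictions: list[int], groups: list[str]) -> float:
    """Difference between the highest and lowest positive-prediction
    rates across groups (0.0 means perfectly equal rates)."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return max(rates.values()) - min(rates.values())

# Example run: re-check on every model release, alert past a policy limit.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"Parity gap: {gap:.2f}")  # e.g., flag for review if gap > 0.10
```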
There's a substantial amount of risk that comes with that, because right now the speed of the market might tempt AI startups to skip these steps. How do you establish a governance program?
People also often forget about the cyber security aspect of it. Data security is one of the biggest sources of vulnerabilities. There are some core things we've already been doing - lessons learned from the cyber security phase of our own lives - that apply to how we establish responsible AI and mitigate those risks.
Montgomery: When I think about the ethical side of AI, it's really a separation of personal versus corporate. Personally, you can go out and do particular things with AI models that don't impact the world, but corporately, when you begin to buy models from third parties and incorporate or ingest them into your systems, you're now held accountable for how those models were developed. You just have to be very careful.
We're very specific about the models that we use. There is a lot of data that flows through automobiles. And there are a lot of decisions that we make based on that data, so if we ingest the wrong model to help us make those decisions, we're going to be held accountable. We are held accountable by 50 different jurisdictions plus DC. So, you have to stay up to date on the law. There are people out there who are maintaining pretty high-paying jobs just by staying up to date on legislation.
Dingare: Always keep humans in the loop. You can't just defer to AI and let it figure it out. You have to be careful and understand who built the LLM and what biases they might have had because you're responsible if you use that LLM.
Sacks-Mandel: All LLMs are not created equal. Some LLMs are good at one thing and others are good at another, so the human has to act as a traffic cop, determining the context and directing which LLM will provide the best answer.
The other point is maintenance - continual maintenance. You're going to have to maintain these models or they're going to degenerate. I think they call that model drift: if a model is generating its own data and learning from it, it's going to keep reinforcing its own biases and challenges.
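One hedged way to operationalize the drift-watching Sacks-Mandel describes is to compare a model's recent output scores against a baseline window captured at deployment. The sketch below uses a two-sample Kolmogorov-Smirnov test from scipy; the score values and the significance threshold are illustrative assumptions, not a prescribed methodology.

```python
# A minimal drift-monitoring sketch: flag when the model's recent score
# distribution diverges from a baseline snapshot taken at deployment.

from scipy.stats import ks_2samp

def drift_detected(baseline_scores: list[float],
                   recent_scores: list[float],
                   p_threshold: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test: a small p-value means the two
    samples are unlikely to come from the same distribution."""
    statistic, p_value = ks_2samp(baseline_scores, recent_scores)
    return p_value < p_threshold

# Illustrative numbers only: confidence scores at launch vs. last week.
baseline = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.87, 0.94]
recent = [0.71, 0.68, 0.74, 0.66, 0.70, 0.73, 0.69, 0.72]

if drift_detected(baseline, recent):
    print("Drift detected - schedule revalidation or fine-tuning.")
```

A check like this says nothing about why the distribution moved; it only tells the team when to step back in, which is the continual-maintenance point above.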
Sacks-Mandel: As AI models become more advanced and human-like, how can we bridge the gap between the technology and the people using it, so that AI augments human capabilities?
Montgomery: There’s a business opportunity there because we may look at it and say, ‘Oh that's crazy, why would someone upload their entire network topology into ChatGPT?’ but the reality is a lot of people still don't understand the technology and we make assumptions that everybody's knowledgeable and everybody's playing by the same rules. But the reality is people are just tinkering and they don't understand the implications of what they're doing.
Sacks-Mandel: What about talent? There are so many new roles that people can play. What are the roles you think people should be thinking about and moving toward?
Montgomery: When we talk about talent, first you have to strip yourself of everything you think you know, because otherwise you're going to be less cautious. You need to step into this field with open eyes. Yes, maybe there's a core technical skill set that could be beneficial, such as math, statistics, or probability. Yet I'm pretty sure you've seen plenty of cyber startups led by people with zero cyber skills. I would say the number one skill set is curiosity. If you're curious, you're going to seek out the knowledge necessary to be proficient. It's going to lead some of you to play with Python, and to be able to articulate your viewpoints on AI so that the higher-end leaders sitting across from you will know you're not just fluff. But it starts from a curious place. If we put too many guardrails on it, we're not going to go far.
Dingare: It's not just those who are developing the technology who should be considered for their talent. One thing we did at my previous company was create two different personas - a persona of an expert and a persona of a citizen data scientist. We did a lot of pair programming and paired use cases, with an expert on a use case and a couple of citizen data scientists alongside them, and that worked well. It amplified the reach of the skill set within the company and also helped us change the culture, because one of the big obstacles in this whole field of AI is change management.
Sacks-Mandel: It goes back to design thinking, where there is empathy for the end user. What problems are you trying to solve? How could you make this experience more frictionless for the user? The human has to be at the center, and I can't emphasize this enough. We need to keep the people in mind. The change management you're talking about is always about the people.
Sacks-Mandel: Is AI going to stop people from critical thinking?
Montgomery: AI impacts the cognitive ability to learn, to interpret, to consider the information you're given, to validate it, and to prove that it's accurate, because you're getting it so quickly. Some of us are going to take it, run with it, put it in reports, and then get fired over the inaccuracies. Most of us are learning to validate carefully.
Sacks-Mandel: That’s a big lift for education. Instead of telling a student to go write an essay on this subject (because they're just going to ChatGPT and get it written), teachers need to say, ‘What's your thesis? What's your hypothesis? What are you trying to prove or disprove? Where are your sources? What's your outline?’ I don't even care if the answer is right or wrong, show me your thought process.