
How AI and Data Management Have Improved Cloud Computing

How is cloud computing evolving beyond traditional infrastructure to become the backbone of AI development, data management, and emerging technologies like AGI? 

Join our panel of industry experts for a practical dive into the technologies and strategies reshaping cloud architecture, DevOps, and network design. Panelists include: Mandi Walls, DevOps Advocate at PagerDuty, Sachin Sharma, Lead Director at CVS Health, Shilpa Shastri, Principal Product Manager, Data and Insights at Apptio, and Ramesh Pateel, Director of Software Engineering at Wasabi Technologies. 

Below are some highlights of the conversation. Watch the full discussion here and make sure to stay up to date on all of Tech in Motion's upcoming events, both virtually and in-person!

Mandi Walls: What cloud computing trends are top of mind for you this year? 

Sachin Sharma:  Nowadays, it's all about AI. Whatever modernization we’re doing is AI-related. The main focus is on making AI products, tools, and technologies accessible to everyone within the firm to truly leverage the power of AI. So we’re building out platforms using cloud advancements to make these tools available firm-wide, not just to individual teams. 

Multiple applications are already leveraging cloud-based AI technologies. We also focus on making data available for real-time analytics, helping business users gain insights on the fly. 

Finally, we’re integrating all our AI/ML platforms into a unified solution rather than teams building in silos. This centralization increases scalability, efficiency, and time to market. 

Shilpa Shastri: AI/ML is creating a massive opportunity for companies to move away from on-prem and toward cloud. Two trends stand out to me: 

First, the explosion of data needed for AI/ML development. Cloud helps manage data securely across regions and scales. That alone is driving more cloud and multi-cloud adoption. 

Second, I see companies maturing beyond basic cost visibility. They're moving into FinOps maturity, not just reviewing billing files but understanding how to optimize cloud spend as adoption accelerates. 

Mandi Walls: Eventually, we’ll be asking the AI how to optimize our cloud spend anyway, right? 

Ramesh Pateel:  Coming from a platform engineering and operations background, I want to highlight the rise of AIOps. Everyone wants to adopt AIOps and drive operations using data-driven or AI-based methods. 

There’s a big push for operational efficiency and product availability, getting as close to 100% uptime as possible. 

We’re also seeing renewed investment in edge computing. It’s becoming crucial with the growth of IoT and AI. How quickly can you process and respond to data at the edge? That’s the question. 

Additionally, multicloud is growing, but hybrid cloud is booming as well, especially in large corporations where standardizing cloud operations is a key focus. 

It reduces duplication and accelerates time to market, which ties back into Sachin’s earlier point. 


Mandi Walls: While the ratio of cloud types used hasn’t changed much, the overall "pie," aka total spend, has gotten bigger. Organizations are still relying on a mix of public cloud, managed services, and on-prem data centers. What’s driving this trend? How are companies handling hybrid and multicloud strategies? 

Shilpa Shastri: In my own work, I manage a product called Cloudability, which helps enterprise customers manage their cloud costs. Most of our customers—over 75%—are hybrid or multicloud. 

I recently saw a Gartner report that said over three-quarters of organizations on the cloud prefer either a hybrid or multicloud strategy. The main drivers are vendor lock-in concerns and the desire to use best-of-breed solutions. For example, Google Cloud is excellent for AI/ML and data analytics. AWS has different strengths, and Azure is often preferred for database workloads. So, enterprises want flexibility. It also improves cost optimization and even helps with security. 

Multicloud gives customers more control and better options. It’s becoming the standard, not the exception. 

Mandi Walls: Yeah, and it creates new opportunities, but it also requires a wider range of expertise within teams. The clouds don’t plug in one-to-one. You need to know the inner workings of each vendor, which increases complexity. 

Shilpa Shastri:  Totally. Even how vendors bill you differs across platforms. You think one thing works a certain way on AWS, but it’s completely different on Azure or GCP. 

Sachin Sharma: When cloud started gaining traction, organizations were focused on scalability, optimization, and security. But for large enterprises, migrating isn’t easy. You have to ask: Is it a lift-and-shift, or are we truly modernizing? 

If you’re just lifting and shifting, you’re probably not leveraging the full potential of the cloud. True modernization means redesign, and that takes significant effort. That’s why so many companies are still hybrid. They're in the middle of a long journey to full cloud adoption. 

With multicloud, no one wants to be locked into one vendor. If something goes wrong (costs, outages, anything), having flexibility helps. But we also need smart strategies. Not every application needs the cloud’s scale. Some apps are simple. Others are small. You have to weigh the ROI. Choose wisely: what should move to the cloud, what should stay hybrid, and how can you design cloud-agnostic solutions? Tools like Databricks, Snowflake, and Kubernetes help make that possible.

Ramesh Pateel: I'd like to add something on hybrid cloud. Now that we're in an AI-driven engineering era, data has become foundational. We’re generating so much data that no data is useless anymore.

But storing all that data in the public cloud is expensive. That's why many enterprises are adopting hybrid strategies, keeping large data sets on-prem or in private data centers and using public cloud more like an extended compute edge. 

Customers use lower-cost storage or keep data private, then rely on the public cloud for compute. Solutions like AWS Direct Connect help bridge those environments. It's a model that supports high performance, scalability, and cost savings. 


Mandi Walls: Let’s jump to another key topic: Cloud migration. We’ve talked about lift-and-shift versus rearchitecture. But are there still common pitfalls you see today? Has AI or platform maturity helped teams avoid past mistakes? 

Ramesh Pateel: In large organizations, one of the biggest pitfalls is a lack of standardization. 

Teams often start their cloud journey independently: one product, one pipeline. Over time, this results in siloed infrastructure. 

Years later, leadership tries to impose standardization across the organization. But that’s tough when you already have live production workloads running in wildly different environments. 

Governance is another challenge. How do you enforce it across old and new systems? Cost control is also major. Cloud isn’t cheap if you don’t plan it properly. Teams often move quickly, without fully understanding the long-term cost implications.

To overcome these pitfalls, we see a few patterns. Design first: spend more time upfront on architecture and planning. Shift left: move cost and security checks earlier in the SDLC. Policies as code: build governance, security, and FinOps into infrastructure code from the start.
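The policies-as-code idea can be sketched as a small pre-deployment check. This is a minimal, hypothetical Python illustration: the rule names, required tags, and dict-based resource definition are all invented for the example, and real teams typically reach for tools like Open Policy Agent or cloud-native policy services instead.

```python
# Hypothetical policy-as-code sketch: validate a resource definition
# against governance, security, and FinOps rules before deployment.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # illustrative FinOps tags

def check_policy(resource: dict) -> list[str]:
    """Return a list of policy violations for one resource definition."""
    violations = []
    missing = REQUIRED_TAGS - resource.get("tags", {}).keys()
    if missing:
        # FinOps: untagged resources mean unattributable spend
        violations.append(f"missing tags: {sorted(missing)}")
    if not resource.get("encrypted", False):
        # Security by design: encryption checked before deploy, not after
        violations.append("storage must be encrypted at rest")
    return violations

bucket = {"tags": {"owner": "data-team"}, "encrypted": True}
print(check_policy(bucket))
```

Running checks like this in the CI pipeline is what "shift left" looks like in practice: the violation surfaces at review time rather than on the monthly bill.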

Sachin Sharma: Everyone talks about optimizing cloud spend, but when teams first migrate, they often don’t know what the bill will actually look like. 

People want to experiment, but they don’t always have visibility into the cost impact until it’s too late. That’s where tools like Cloudability and planning with cloud providers can help. Understanding cost implications before implementation is key. 

Second, standardization. Early migrations often move app-by-app. Later, companies try to build centralized platforms to integrate everything. That’s backwards. Ideally, you design the framework first, then migrate in a structured way. 

Third, operations. Business needs don’t pause for your cloud migration. If it’s a multi-year effort, you can’t freeze development on-prem. You may need to build and maintain systems in parallel, both on-prem and in the cloud. Eventually, you do a controlled cutover, with both systems running in parallel for validation. It’s complex, but necessary for large-scale modernization. 


Ramesh Pateel: Cloud security is still a challenge for both small and large organizations. Security must be baked into the SDLC. It shouldn’t be an afterthought. Unfortunately, early cloud transformations didn’t always prioritize this. But starting around 2020–2021, we’ve seen a shift. Security is now moving to the beginning of the process—security by design, not just security in operations. That includes everything: software, network, and platform security. 

Mandi Walls: How are generative AI and agentic AI reshaping cloud infrastructure and app design? What are the big changes you’re seeing? 

Shilpa Shastri: Generative AI is fundamentally transforming both infrastructure and application design. Cloud providers are adapting fast to meet the compute demands of AI, offering new instance types with more GPUs, memory, and specialized hardware like TPUs. 

There’s also a push toward edge-cloud hybrid architectures for latency-sensitive applications. Smaller, optimized models are being deployed closer to users for real-time response. Data pipelines are evolving, too. Generative AI requires massive, sophisticated data systems: streaming, pre-processing, embedding, and vector databases. We’re using tools like Databricks, Snowflake, and others to support these needs. 
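The embed-and-retrieve step those pipelines center on can be shown in miniature. This is a toy Python sketch only: the hand-made 3-dimensional vectors stand in for real model embeddings, and an in-memory dict stands in for a vector database such as pgvector or Pinecone.

```python
# Toy "embed and retrieve" sketch: documents mapped to vectors,
# stored, and queried by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Stand-in "vector store": id -> (embedding, original text)
store = {
    "doc1": ([0.9, 0.1, 0.0], "cloud cost optimization tips"),
    "doc2": ([0.0, 0.2, 0.9], "quarterly sales figures"),
}

def search(query_vec, k=1):
    """Return the k documents whose embeddings best match the query."""
    ranked = sorted(store.items(),
                    key=lambda kv: cosine(query_vec, kv[1][0]),
                    reverse=True)
    return [(doc_id, text) for doc_id, (vec, text) in ranked[:k]]

print(search([1.0, 0.0, 0.0]))
```

The streaming and pre-processing stages mentioned above sit in front of this step, keeping the store's embeddings fresh as new data arrives.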

Cost is another big factor. Inference is expensive, so companies are optimizing aggressively: intelligent caching, efficient querying, and right-sized infrastructure. 
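Intelligent caching, for instance, can be as simple as memoizing identical prompts so the expensive model call runs only once. A minimal Python sketch, where `call_model` is a hypothetical stand-in for a real inference API:

```python
# Minimal inference-caching sketch: repeated prompts hit the cache
# instead of the (expensive) model. call_model is a stand-in, not a
# real provider API.
from functools import lru_cache

CALLS = {"count": 0}

def call_model(prompt: str) -> str:
    CALLS["count"] += 1          # each real call costs money
    return f"answer to: {prompt}"

@lru_cache(maxsize=1024)
def cached_inference(prompt: str) -> str:
    # Normalizing the prompt (whitespace, casing) raises hit rates.
    return call_model(prompt.strip().lower())

cached_inference("What drives cloud spend?")
cached_inference("What drives cloud spend?")   # served from cache
print(CALLS["count"])  # the model was only invoked once
```

Production systems layer semantic caching, batching, and right-sized hardware on top of this, but the cost logic is the same: avoid paying for inference you have already done.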

And finally, security and governance. AI adds new privacy and access challenges. We’re seeing more specialized cloud security layers for things like model protection and output validation. 


Mandi Walls: Are you seeing teams choose smaller models specifically to manage those infrastructure and cost concerns? 

Shilpa Shastri: Absolutely. Model choice is a key architectural decision, and one that should happen early in the design process. You can't just copy what competitors are doing. The cost, performance, and suitability for your use case all need to be carefully evaluated upfront. 

Sachin Sharma: Cloud is democratizing AI. With all the advancements, organizations can experiment quickly: try, fail, learn, and improve. 

And data is critical. Cloud enables real-time pipelines and event-driven architecture so models can learn and adapt fast. We’re no longer stuck in batch mode. It’s about live, interactive insights. Within our firm, we’ve deployed models as APIs so teams don’t have to build their own. They can use OpenAI or internally hosted models through a shared framework. It’s fast, secure, and consistent for everyone across the organization. 
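The shared-framework idea can be pictured as one routing layer in front of multiple model backends. This is a hypothetical Python illustration; the backend names and the routing rule are invented for the example, not the firm's actual framework.

```python
# Hypothetical "shared model API" sketch: every team calls one internal
# entry point, and a router picks the backend model for the request.

def hosted_backend(prompt):      # stand-in for a hosted provider (e.g., OpenAI)
    return f"[hosted] {prompt}"

def internal_backend(prompt):    # stand-in for an internally hosted model
    return f"[internal] {prompt}"

# Illustrative routing rule: sensitive data never leaves internal models.
ROUTES = {"general": hosted_backend, "sensitive": internal_backend}

def model_api(prompt: str, data_class: str = "general") -> str:
    """Single entry point for all teams; routing, auth, and logging
    would live here instead of being rebuilt in each application."""
    backend = ROUTES.get(data_class, internal_backend)
    return backend(prompt)

print(model_api("summarize this claim", data_class="sensitive"))
```

Centralizing the entry point is what makes the setup "fast, secure, and consistent": policy changes land in one place rather than in every consuming team's code.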

Ramesh Pateel: Generative AI is here to stay, and it's evolving fast. One of the biggest challenges is building the right data pipeline. Without solid, structured, and vectorized data, your models won’t perform well. That’s why cloud providers are investing in tooling to help teams build these pipelines efficiently. 

Another major shift is in model selection. There’s growing interest in tools and frameworks that let you pick models based on your use case, whether it's open source, proprietary, or fine-tuned. Every major vendor is in the game. And every software company is trying to integrate AI in a way that complements these models. 

Mandi Walls: What’s next for cloud? Are there any “under-the-radar” technologies you think could become major trends? 

Shilpa Shastri: First, I think edge computing will continue to grow, especially as agentic AI needs low-latency responses and local inference. But if we’re talking “next wave” beyond AI, I’d say quantum computing. 

It’s still emerging, but a lot of cloud providers, such as Amazon, Google, and IBM, are investing in research and experimentation. Quantum will open up possibilities for cryptography, financial modeling, and supply chain optimization. 


Mandi Walls: What kinds of projects do you think quantum computing will enable? Any “killer apps” you’re seeing? 

Shilpa Shastri: Cryptography is a major use case; quantum can break current security models, so we need new standards. Financial modeling and complex optimization—things that are too computationally heavy for traditional systems. 

Sachin Sharma:   From a security perspective, quantum could change everything. Hackers will be able to break encryption much faster, so we’ll need new protections. But in the near term, what I see is AI being embedded into every cloud tool and service.  Even BI tools now have natural language query capabilities: ask a question, get insights instantly. It’s becoming the default layer across enterprise technology. 

Ramesh Pateel: One under-the-radar shift is simplifying multicloud interoperability. Right now, it’s hard to stitch together services from AWS, Azure, and GCP. That will get easier with better inter-cloud connectivity and neutral platforms. 

Another is edge computing, especially for AI APIs. We’ll see agentic AI running at the edge for faster decision-making and response. Lastly, cost transparency. Vendors will need to simplify billing: no more hidden charges for transfers, storage, and usage. One price, one story. That’s what customers want. 

Mandi Walls: What skills do you think are most important for cloud and AI-related roles going forward? 

Shilpa Shastri: Whether you're early in your career or mid-career, the foundation still matters, and basic computer science and coding skills are a must. But beyond that, I’d strongly recommend focusing on data science and data analytics. Everything being built today, cloud-native or not, relies heavily on data. Enterprises are collecting more data than ever, and they need people who can clean, model, and extract insights from that data. 

From a tooling perspective: Learn Python, understand how to work with large data sets, explore platforms like Databricks, Snowflake, or even modern BI tools. Also, keep an eye on where AI is going. Yes, AI might take over some coding tasks. So don’t just focus on code, learn to ask smart questions, evaluate results, and think critically. 

Product thinking and user awareness will matter more than ever. The future isn't just about writing code. It's about guiding machines to solve meaningful problems. 

Ramesh Pateel: For those just starting out, consider security. It’s not going anywhere. Whether it’s cloud security, data privacy, or AI governance, those are roles that will remain essential. For those mid-career, learn to use AI tools efficiently. They won’t replace you, but they’ll make you more productive. 

AI will help us do more, faster. That opens the door to more innovation, more experimentation, and more startups entering markets that used to be dominated by large players. So, whatever your role (developer, analyst, or engineer), become someone who uses AI to improve outcomes. That’s where the future lies. 

Shilpa Shastri: I agree that data engineers will continue to play a critical role. Enterprise pipelines are extremely complex, and I don’t see AI replacing that kind of foundational architecture work anytime soon. It’s not just building the system; it’s understanding all the edge cases, scale challenges, and integration points. 

Sachin Sharma: I’ll add that the way engineers work is changing fast. With tools like Copilot and Cursor AI, you don’t need to handwrite every line of code anymore. But that doesn’t mean your job is done. You still need to understand the code, validate it, test it, and troubleshoot it. You’ll run into issues, and you need the skills to fix them. These tools are accelerators, not replacements. 

Ramesh Pateel: I’ll touch on platform engineering since that’s my background. Platform engineering is evolving into intelligent platform engineering. We’re building systems that monitor themselves, patch themselves, and scale on demand, especially with AI support. CI/CD pipelines may become part of the platform layer itself. And operations are shifting toward observability and AIOps, systems that tell you what’s wrong before it breaks. Platform engineering isn’t going away. It’s getting smarter.