About the Guest

Jeevan is passionate about applying Data Fabric, Artificial Intelligence (AI), and Machine Learning (ML) to drive step-change improvements in business operations. He holds a deep conviction that human-in-the-loop workflows, built by cross-functional teams and supported by hybrid cloud technology, are critical to the trusted adoption of AI within an enterprise. He is currently a Partner at IBM Consulting, leading the AI & Analytics practice for Healthcare, Life Sciences, and State & Local Government clients. Before IBM, Jeevan led AI/ML market development at Amazon Web Services (AWS), and before AWS, he spent over a decade in the data science consulting practice at Deloitte Consulting.

Key Takeaways

- One benefit of a decentralized data architecture is that it allows domain-specific ownership and stewardship of the data, which reduces the time spent managing differences between operational data planes and analytical databases.
- Decentralization of data is a growing trend within enterprises, for several reasons. One is that data is becoming more valuable, and organizations want to maximize the value they extract from it.
- The data mesh is a platform for self-service data infrastructure that enables interoperability across different domains. Its principles include domain-agnostic functionality, standardization of data products, and self-service data infrastructure.
- Considerable effort is going into trustworthy and responsible AI practices aimed at reducing bias and improving fairness.

Quote

“One of the core fundamental beliefs that organizations are going through right now is data is the new currency within the organizations and how do you maximize the value that you get out of data.
So through that and this data architecture is really where companies see the value that can be derived at scale within these enterprises.” – Jeevan Duggempudi

Highlights from the Episode

What are some of the common challenges you are seeing with enterprises in driving insights within their organizations?

Enterprises face several challenges when scaling their initiatives, which fall into three distinct buckets: non-availability of quality data, AI scalability, and talent. Having “bilingual talent” within teams is fundamental to driving better insights. The goal is to combine technical and functional talent and provide those teams with the relevant resources to work on specific challenges.

How are enterprises addressing these challenges to scale their data and AI initiatives?

Even during these recessionary times, companies are investing in digital initiatives around data and AI. AI is now used as a tool for pricing, marketing, sales, and much more to achieve a competitive advantage. Beyond this, companies are also investing in leadership roles to support these initiatives.

Can you talk a little bit about why you're seeing enterprises moving to decentralized data lakes through data mesh architecture these days? What is the benefit of this approach for delivering data-driven solutions within an organization?

When organizations realize the value of their analytical data, they also come to realize the value of decentralizing it. As central data organizations scale across the various functions of the enterprise, a bottleneck forms because too much time is spent managing the differences between operational and analytical data planes. That leaves analytical data teams too little time to dive deeper into the data and serve insights to business stakeholders quickly. Enterprises are moving to decentralized data lakes through data mesh architecture to overcome these challenges of centralized data management.
This shift is driven by the domain-specific knowledge and data ownership of function-specific data teams, which allows faster insights and better decision-making.

What trends are you seeing in data science and AI today?

Decentralization of data is a trend within enterprises, and the approach has many benefits, including making data more accessible and trusted. One way to achieve this is to decentralize data sources within a mesh architecture, which lets different parts of the organization access and use data productively. Another benefit is that domain knowledge can be brought into data processing, leading to more efficient and effective use of data.

How are the delivery models evolving for data science and analytics to drive value within enterprises?

Advances in trustworthy and responsible AI practices are driving value by improving AI models’ bias, fairness, and transparency. Data science communities are also working on ways to improve delivery models and identify value within enterprises, and quick operational improvements can be stood up through virtual regions. One example: a client’s call volume spiked by 60-70% in just a week, which pushed average call times from about a minute to 10-15 minutes. IBM was able to rapidly deploy cognitive care tools, after which many calls were contained within five minutes or less. This demonstrates the value of cognitive care tools and shows how quickly IBM can deploy solutions that help customers improve their customer experience.
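The data mesh principles discussed above — domain-specific ownership, standardized data products, and self-service discovery — can be illustrated with a minimal, hypothetical sketch. The `DataProduct` and `MeshCatalog` names below are illustrative, not the API of any specific platform:

```python
from dataclasses import dataclass
from typing import Callable

# Sketch of a data mesh idea: each domain team owns and publishes its own
# "data product" with a standardized, discoverable interface, rather than
# handing raw data off to a central analytics team.

@dataclass
class DataProduct:
    name: str                 # standardized identifier, e.g. "claims.monthly_summary"
    owner_domain: str         # the domain team accountable for quality
    schema: dict              # published output-port schema (column -> type)
    read: Callable[[], list]  # self-service output port

class MeshCatalog:
    """Minimal self-service catalog: domains register, consumers discover."""
    def __init__(self):
        self._products = {}

    def register(self, product: DataProduct) -> None:
        self._products[product.name] = product

    def discover(self, name: str) -> DataProduct:
        return self._products[name]

# The (hypothetical) claims domain publishes its own analytical data product.
catalog = MeshCatalog()
catalog.register(DataProduct(
    name="claims.monthly_summary",
    owner_domain="claims",
    schema={"month": "str", "total_claims": "int"},
    read=lambda: [{"month": "2023-01", "total_claims": 1200}],
))

# A consumer in another domain discovers and reads it directly,
# without routing the request through a central data team.
product = catalog.discover("claims.monthly_summary")
rows = product.read()
```

The design point is that ownership and the published schema travel with the product itself, so the central bottleneck described above — one team mediating every operational-to-analytical handoff — disappears.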