As consumers, we’ve widely welcomed artificial intelligence and machine learning into our daily lives. “Smart” speakers, facial recognition on our phones, targeted ads we love to hate — these are just some of the AI-powered technologies all around us.
But inside companies, where AI holds virtually incalculable benefit in a range of use cases — such as hyper-efficient and productive IT, supply chain automation, and increasingly intelligent cybersecurity ecosystems — the status of adoption is more of a mixed bag.
In a recent survey of 700 IT pros across the globe, a whopping 95% said they believe their companies would benefit from embedding AI into daily operations, products, and services, and 88% want to use AI as much as possible.
In the trenches, IT staffers see AI as a way to help them do their jobs faster and better, and they’re gravitating toward it as naturally as consumers have gravitated toward smart speakers at home.
However, a mere 6% of C-level leaders who responded to the survey reported actual adoption of AI-powered solutions across their company.
That’s a yawning gap, to say the least, but an understandable one. In my conversations with other CIOs, I hear all the time that, as so often happens with new technologies, the C-suite is wrestling with a variety of challenges — some technical, some organizational — in marching forward with AI.
IDC predicted recently that worldwide revenues for the AI market, including software, hardware, and services, will climb 16.4% this year to $327.5 billion and will break the $500 billion mark by 2024. Much of that growth will come from enterprises. So, clearly, broader AI adoption inside companies isn’t a matter of if but when.
Why, then, is it so challenging to adopt AI and make it stick? An AI implementation strategy has many moving parts, and no doubt some companies feel overwhelmed by the multi-faceted obstacles to adoption. But, in fact, riding the AI wave doesn’t have to be that hard. Kick-starting AI efforts is a lot easier if companies can ask and answer four key questions.
1. Are we focused and intentional?
AI is too big and important to approach half-heartedly. It can’t be treated as just another to-do list item, executed off the sides of proverbial desks with attention constantly stolen by seemingly more pressing near-term priorities. Companies must be truly intentional about AI; they have to adequately fund it, unabashedly devote some of their smartest people to it, and recognize that the journey won’t be easy.
CIOs have a huge role to play, but they can’t do it on their own because so many of the challenges around AI go beyond their scope of influence. It helps mightily if a critical mass of two or three top executives, including the CEO, personally commit and drive the rest of the company toward AI as a critical piece of its future.
If that doesn’t happen, I expect boards of directors to increasingly push company leaders to show momentum in their AI initiatives. Better that top executives seize the reins first.
2. Are we finally prepared to tackle the data challenges?
One of the most significant hurdles in AI adoption is coming to grips with all the integration challenges and technology upgrades required for AI-ready, cloud-based infrastructure stacks.
According to an IDC report, enterprises typically spend “around one third of their AI lifecycle time on data integration and data preparation vs. actual data science efforts, which is a big inhibitor to scaling AI adoption.”
In many ways, AI inherits the data and analytics challenges that companies were facing before we started calling it AI. Since many companies haven’t yet resolved those challenges, layering AI on top can be problematic.
For example, data that resides in the marketing department may be stored on different systems and have different formats and quality than data in the sales department. That’s a problem for AI applications that need consistent data across the functions.
Companies must acknowledge they’ll need the right infrastructure for centralizing and expediting the work of getting all this data in AI-ready shape, without impacting the insight-yielding data science that each function may have independently undertaken. Fortunately, the technology to make this easier exists.
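As a rough illustration of that preparation work, the sketch below uses pandas with hypothetical file and column names to show the kind of normalization that typically has to happen before departmental data is AI-ready: reconciling column names, units, and date formats across a marketing export and a sales export.

```python
# Minimal sketch (hypothetical files and columns): harmonizing two
# departmental exports into one consistent, AI-ready table with pandas.
import pandas as pd

# Marketing exports spend in thousands with its own column names;
# sales uses different names and formats for the same concepts.
marketing = pd.read_csv("marketing_leads.csv")   # columns: Lead_Date, Acct, Spend_K
sales = pd.read_csv("sales_pipeline.csv")        # columns: date, account_id, amount

# Align column names and units to a single schema.
marketing = marketing.rename(
    columns={"Lead_Date": "date", "Acct": "account_id", "Spend_K": "amount"}
)
marketing["amount"] = marketing["amount"] * 1_000

# Enforce one date format and one account-ID convention across both sources.
for df in (marketing, sales):
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    df["account_id"] = df["account_id"].astype(str).str.strip().str.upper()

# Combine into a single table and drop rows whose dates could not be parsed.
combined = pd.concat([marketing, sales], ignore_index=True).dropna(subset=["date"])
```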
3. Have we thought through the people effect?
Aside from the technology factors, it’s critical for companies to make sure they have a workforce with the right skills to support AI. This is a complex topic, for certain, but let me first address the question always on people’s minds around AI: Will it take away jobs?
This is often framed as an “either/or” argument — either the machines have the jobs or the humans do — but I think the reality is far more nuanced.
Many IT teams are filled with creative thinkers and problem-solvers who find themselves constantly pulled into the mire of mundane, routine work. Thanks to automation, their energies can be unlocked. Thus, AI’s biggest value is not just making life easier for IT staffers, even though that is one of its more common use cases today. It’s about enhancing the potential of all employees by taking away rote tasks or solving problems that humans can’t solve at scale.
What about people who are capable only of performing the routine tasks to be automated? For them, AI is a real threat, but also an opportunity. Here’s why: Companies will face extreme competition for the limited talent that can construct/operate AI solutions. Thus, it is in their interest to re-train existing employees as much as possible. A win-win: The employee acquires vital new skills, and the company doesn’t have to look outside for new hires.
4. Is our governance and security house in order?
Cross-functional and executive involvement in overseeing the reputational, operational, and financial risks associated with AI is crucial to deploying it successfully. For AI to be trustworthy, bias in data must be mitigated. Whatever a company does with AI has to meet its own business and ethical standards. It must also comply with a growing number of governmental regulations.
Though AI governance is still in its infancy, as a KPMG report put it, “leading organizations are addressing AI ethics and governance proactively rather than waiting for requirements to be enforced upon them.”
Another core issue is security, where AI models raise unique considerations. In standard software development, source code repositories are secured. But the data used in AI models sits outside that ecosystem. This demands that organizations broaden their security strategies and practices to account for the uniqueness of AI development.
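A minimal sketch of one such practice, using only the Python standard library and hypothetical file names: record a checksum of the training data alongside the model artifact, so that a change to the data is as detectable as a change to source code under version control would be.

```python
# Minimal sketch (hypothetical paths): pin the training dataset's checksum
# next to the model artifact so data tampering can be detected later.
import hashlib
import json
import pathlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

manifest = {
    "dataset": "training_data.parquet",                     # hypothetical dataset file
    "dataset_sha256": sha256_of("training_data.parquet"),
    "model_artifact": "model_v1.bin",                        # hypothetical model file
}

# Store the manifest alongside the model; re-hashing the dataset later and
# comparing against this record reveals any unauthorized modification.
pathlib.Path("model_manifest.json").write_text(json.dumps(manifest, indent=2))
```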
By answering the four questions outlined above, companies can remove the fear, uncertainty, and doubt around AI and begin enjoying the benefits of a truly game-changing technology. Jump in — the water is warm.
Sharon Mandell is Senior Vice President and Chief Information Officer at Juniper Networks.
"company" - Google News
August 01, 2021 at 08:21PM
https://ift.tt/37aP5wD
4 conversations every company needs to be having about AI - VentureBeat
"company" - Google News
https://ift.tt/33ZInFA
https://ift.tt/3fk35XJ
Bagikan Berita Ini
0 Response to "4 conversations every company needs to be having about AI - VentureBeat"
Post a Comment