Artificial Intelligence: The Evolving Cybersecurity Impact for Organizations

Mar 25, 2024

Artificial intelligence (AI) has become an integral part of our day-to-day lives and is poised to grow significantly in importance over the next few years. This presents opportunities and challenges for organizations, as well as for their workers and customers.

AI can be found in everything from virtual assistants like Amazon’s Alexa and Apple’s Siri, to the algorithms that learn our behavioral patterns on social media, to robotic vacuums and self-driving cars. Businesses in a wide range of sectors are transforming their operations by using AI capabilities in automation, data analysis, personalization, customer service, and beyond.

As AI continues to evolve, so does an array of cybersecurity risks. Organizations that seek to avoid financial and reputational damage have great incentive to implement artificial intelligence ethically and securely, maximizing its benefits while minimizing the potential risks and legal exposure.

Incorporation of the NIST Framework

Cybersecurity professionals can work with their organization’s compliance teams to take advantage of research and other resources provided by the National Institute of Standards and Technology (NIST). The U.S. federal agency recently released an artificial intelligence risk management framework containing a set of guidelines and best practices to help organizations manage and secure AI systems.1

The NIST framework provides a structured approach to assessing and managing AI-related risks, with the goal of helping organizations promote transparency, trust, and accountability in AI development and deployment. The framework is organized around four core functions: Govern, Map, Measure, and Manage.

It helps organizations identify and mitigate potential risks associated with AI technologies, such as data privacy exposure, security vulnerabilities, bias, and performance issues. The framework also aligns with existing standards and regulations related to AI, such as data protection laws and industry-specific guidelines, which helps organizations remain compliant with legal and ethical requirements.

Although the NIST AI framework does not provide a quantitative measurement or scoring system for its implementation, organizations can develop key performance indicators (KPIs) that align with the framework’s functions. Organizations can track metrics related to data quality, model performance, system reliability, security incidents, and compliance with governance policies outlined in the framework. By measuring these metrics over time, organizations can assess and quantify the effectiveness of the AI systems they implement and identify areas for improvement.
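
As an illustration, a compliance or security team might track such KPIs in a simple register keyed to the framework’s functions. The sketch below is a minimal, hypothetical Python example; the metric names and values are assumptions for illustration, not requirements of the NIST framework.

```python
from dataclasses import dataclass, field
from datetime import date
from statistics import mean

# Hypothetical KPI register keyed to the NIST AI RMF functions
# (Govern, Map, Measure, Manage). Metric names and values below
# are illustrative assumptions, not NIST requirements.

@dataclass
class KpiReading:
    metric: str          # e.g. "model_accuracy", "security_incidents"
    value: float
    as_of: date

@dataclass
class KpiRegister:
    function: str                              # "Govern", "Map", "Measure", or "Manage"
    readings: list[KpiReading] = field(default_factory=list)

    def record(self, metric: str, value: float, as_of: date) -> None:
        self.readings.append(KpiReading(metric, value, as_of))

    def trend(self, metric: str) -> float:
        """Average of all readings for one metric, as a simple trend proxy."""
        values = [r.value for r in self.readings if r.metric == metric]
        return mean(values) if values else float("nan")

# Example usage: track monthly security incidents under the Manage function.
manage = KpiRegister("Manage")
manage.record("security_incidents", 3, date(2024, 1, 31))
manage.record("security_incidents", 1, date(2024, 2, 29))
print(manage.trend("security_incidents"))  # 2.0
```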

AI Risks In Your Vendor Ecosystem

Just as organizations weigh the risk and reward of using AI internally, they also should extend that due diligence to their vendors. The first place to start is the organization’s own AI policy: organizations that implement a formal AI policy will have a better understanding of the risk they take on through vendor relationships.

Some companies have created a task force dedicated to responsible AI usage within the enterprise and to assessing how business partners use AI. An organization’s legal team can help draft contract clauses and other protections that proactively govern how vendors use AI. The language of these measures should address data ownership, access, and retention.

Proper AI-related risk management also calls for a detailed review of your vendors’ security practices, data handling procedures, and compliance with applicable regulations. Look for language on data and privacy commitments, breach notification procedures, and liability clauses. Vendors should be able to demonstrate compliance with regulations (GDPR, HIPAA, HITECH, etc.) through certifications, security assessments, audits, and other means.
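
For teams that want to keep this review consistent across vendors, a lightweight due-diligence record can capture the contract items described above. The following Python sketch is purely illustrative; the vendor name, field names, and attestation lists are assumptions rather than a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical vendor due-diligence record. The checklist items mirror the
# review points discussed above; the exact fields are illustrative only.

@dataclass
class VendorAssessment:
    vendor: str
    assessed_on: date
    uses_ai: bool
    data_ownership_clause: bool = False
    breach_notification_clause: bool = False
    liability_clause: bool = False
    certifications: list[str] = field(default_factory=list)        # e.g. ["SOC 2", "ISO 27001"]
    regulations_attested: list[str] = field(default_factory=list)  # e.g. ["GDPR", "HIPAA"]

    def open_gaps(self) -> list[str]:
        """Contract items still missing, to feed the next contract review."""
        gaps = []
        if not self.data_ownership_clause:
            gaps.append("data ownership, access, and retention language")
        if not self.breach_notification_clause:
            gaps.append("breach notification procedure")
        if not self.liability_clause:
            gaps.append("liability clause")
        return gaps

# Example: flag missing contract language for a (hypothetical) vendor that uses AI.
vendor = VendorAssessment("ExampleCloudCo", date(2024, 3, 1), uses_ai=True,
                          breach_notification_clause=True,
                          regulations_attested=["GDPR"])
print(vendor.open_gaps())
```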

Performing regular audits and assessments of your vendors helps verify that they are following security best practices and meeting contractual obligations. These strategies support the task force and feed into the vendor incident response plan, so the organization can respond quickly and effectively to a vendor security incident. It is important to have an established, clear security policy, continuous monitoring, regular contract reviews, and ongoing communication with vendors to ensure an open and secure partnership.

Detection and Response

Organizations that intend to remain competitive will implement AI-driven tools with new capabilities, drawing on the expertise of their security teams and AI professionals, to better understand and protect against the evolving risk landscape. The cybersecurity industry has leveraged AI for decades, and that trend has accelerated in recent years. AI is now found in a plethora of cybersecurity tools, helping to enhance threat detection, response, and the overall security of systems, networks, and data.

On the other end of the spectrum, cybercriminals have started exploiting AI to launch more sophisticated and targeted attacks, triggering an uptick in AI-powered malware, deepfake technology, and advanced phishing schemes. Companies can address these challenges by taking both offensive and defensive approaches to cybersecurity while leveraging artificial intelligence.

Various AI technologies can detect and respond to threats in real time and can be trained on your specific risk environment to identify and counteract malicious activity. Here are three of the most common examples of AI-driven cybersecurity tools:

  1. Endpoint Detection and Response (EDR): AI-powered EDR can perform threat detection, automated incident response, and detailed forensic analysis, and can generate reports with insights into the organization’s security posture, risk trends, and response effectiveness.
  2. Intrusion Detection Systems (IDS): AI gives an IDS enhanced capabilities for identifying and responding to cyber risks. AI algorithms establish a baseline of normal network or system activity and generate alerts when deviations occur (see the sketch after this list). With machine learning capabilities, the IDS can learn from new behavior patterns, improving detection accuracy. AI-enhanced IDS also can draw on threat intelligence feeds, trends, and databases to enhance their detection capabilities.
  3. Security Information and Event Management (SIEM): These systems can use AI to analyze and correlate security events and logs from a variety of sources to identify and respond to potential incidents.
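
To make the baseline-and-deviation idea in item 2 concrete, the minimal sketch below trains an unsupervised anomaly detector on "normal" network activity and flags deviations. It uses scikit-learn’s IsolationForest as a stand-in for the proprietary models inside commercial IDS products; the feature choices and simulated data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative anomaly-based detection in the spirit of an AI-enhanced IDS:
# learn a baseline of "normal" activity, then alert on deviations.
# Features (bytes sent, packets/sec, distinct ports) are assumed for the example.

rng = np.random.default_rng(42)

# Baseline: simulated normal traffic observations (one row per time window).
normal_traffic = np.column_stack([
    rng.normal(500, 50, 1_000),   # bytes sent (KB)
    rng.normal(120, 15, 1_000),   # packets per second
    rng.normal(8, 2, 1_000),      # distinct destination ports
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)  # establish the baseline of normal activity

# New observations: one typical window and one that resembles exfiltration.
new_windows = np.array([
    [510, 118, 7],       # normal-looking window
    [5_000, 900, 60],    # large transfer with a port-scanning pattern
])

for window, label in zip(new_windows, model.predict(new_windows)):
    status = "ALERT: deviation from baseline" if label == -1 else "ok"
    print(window, status)
```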

In addition to incorporating such tools and maintaining diligence around them, organizations should foster enterprise-wide collaboration, with the guidance of AI experts and cybersecurity professionals, on how to apply risk management best practices specific to the organization’s software. This includes a thorough and robust approach to conducting regular security audits and assessments to identify, assess, and remedy potential vulnerabilities, as well as implementing data privacy and security measures that evolve with the organization’s findings.

Coming Full Circle With Your Cyber Insurance Coverage

Cyber insurance policies generally cover a range of risks related to cyber incidents, such as data breaches, network outages, ransomware attacks, and cyber extortion. We have yet to see the carriers we partner with take a stance on AI-related risks through policy language. However, existing policy language is expected to evolve to address AI-related risks, and in the relatively near future we may see coverage variations depending on the specific policy and insurance provider.

Our carrier partners often lead the discussion in the claims environment, and the shift in focus to AI-related claims has been a topic of frequent conversation. One of the largest cyber insurers in the market, Beazley, is closely watching the regulatory landscape as it pertains to AI and data protection, consumer protection, privacy, litigation around intellectual property, and guidance for AI developers and users.2

AI-related cyber claims have yet to pour in, but carriers anticipate that will change. Most are hyper-aware of the potential exposure lurking in the background as the future remains largely unclear. Increasingly sophisticated phishing emails, deepfake videos, and exposure to third-party cloud and security providers that could lead to systemic attacks are all top of mind.

Until carriers actually see their claims experience affected by artificial intelligence incidents, coverage will likely be slow to change. For now, carriers continue to provide coverage for certain aspects of AI-related risks, such as data breaches or cyber attacks that result from AI system vulnerabilities. Stephens will continue to carefully review cyber insurance policies and consult with our carrier partners, so our clients will know of any changes in coverage intent.

As organizations utilize AI in ever more innovative ways, they should consider purchasing specialized insurance coverage for AI-related risks, such as errors and omissions (E&O) insurance or technology errors and omissions (tech E&O) insurance. These policies are specifically designed to protect businesses against liability claims arising from errors, omissions, or negligent acts related to the use of technology, including AI systems.

Assessing an organization’s exposure from AI-related risks begins with a proactive approach. Organizations that evaluate their current use of AI technologies with a thorough review of potential risks and vulnerabilities will be able to apply the appropriate risk management strategies. Stephens stands ready to help organizations mitigate the risks associated with AI adoption and enhance their overall cybersecurity posture.

About the Expert

Viviana Abbasi

Assistant Vice President, P&C Senior Account Executive, Insurance

  1. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
  2. https://www.beazley.com/en-US/articles/2024-an-outlook-on-artificial-intelligence