SiRAcon 22 Session Abstracts

 Keynote Session Abstracts

SiRAcon 22 Sponsors

Without the generosity of the community, events like this would not be possible.

Sponsorship opportunities are still available



Bronze Sponsor


Friend of SiRA

A Grand Unified Theory of Enterprise Risk Management

Graeme Keith

Risk management is the coordination of activities to direct and control an enterprise with regard to risk, where risk is understood as the effect of uncertainty on objectives. Thus, the essence of Enterprise Risk Management is to address the questions: What are our objectives? What do we direct and control that influences those objectives? What are the uncertainties that mediate that control and that independently influence those objectives? How do our decisions and controls interact with these uncertainties and how do those uncertainties interact with each other? How do our decisions regarding the things we control cascade down stochastic causal chains in this network of uncertainty and influence the objectives we are trying to optimize?

To manage risk, we must understand how decisions impinge on objectives so that we can make decisions that optimize those outcomes. Because our outcomes are uncertain, we have to frame our objectives in the language of uncertainty. We aim to maximize upside potential while minimizing downside exposure, while recognizing that at least in competitive situations upside potential entails downside exposure (though the converse is, unfortunately, not necessarily the case).

From this point of view, Enterprise Risk Management is nothing more than a stochastic optimization problem, and in this presentation, I will argue that this perspective helps us to reconcile and align apparently disparate approaches to managing risk. We adapt the models and control paradigms we use to manage risk to the scope and the complexity of the management task we are addressing. Scope refers to scale – local, regional or global – to time – short-term, medium-term or long-term – and to hierarchical depth. Complexity has two components. On the one hand, it is concerned with residual uncertainty relative to the decisions to be made, spanning from simple “no-brainers”, through the careful balancing of cost against benefit and risk against reward, to long-term fundamental strategic realignments. On the other hand, complexity describes the nature of the evolution of the systems under consideration: deterministic versus stochastic, stationary versus trending, linear versus non-linear, stable versus unstable.
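
To make the stochastic-optimization framing concrete, here is a minimal sketch with entirely invented parameters (the loss model, rates, and risk-adjusted objective below are illustrative assumptions, not material from the talk): choose the mitigation spend that maximizes expected value minus a penalty for dispersion.

```python
import numpy as np

rng = np.random.default_rng(7)

def net_outcome(spend, n=100_000):
    """Simulated net result (M$) for a given annual mitigation spend.

    Spend is a certain cost, but it reduces the frequency of loss events.
    Every parameter here is an illustrative assumption, not from the talk.
    """
    freq = rng.poisson(lam=2.0 * np.exp(-3.0 * spend), size=n)  # events/year
    severity = rng.lognormal(mean=0.0, sigma=1.0, size=n)       # M$ per event
    return -spend - freq * severity  # simplified aggregate loss

# Brute-force stochastic optimization: evaluate a grid of decisions and keep
# the one that maximizes a risk-adjusted objective (mean minus one std dev).
best_spend, best_score = None, -np.inf
for spend in np.linspace(0.0, 1.0, 21):
    results = net_outcome(spend)
    score = results.mean() - results.std()
    if score > best_score:
        best_spend, best_score = spend, score

print(f"optimal spend: {best_spend:.2f} M$ (risk-adjusted value {best_score:.2f} M$)")
```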

By considering best risk management practice along these two dimensions – scope and complexity – we see that many of the apparent dichotomies in risk management are simply a question of “horses for courses”: selecting the most appropriate paradigm for the application at hand. Distinctions such as qualitative versus quantitative, or predict-and-plan versus monitor-and-adapt, disappear. We also come to understand that some of the most spectacular failures of risk management practice are due to paradigms entirely appropriate for the applications for which they were developed being forced into applications whose scope and complexity call for different approaches: qualitative methods applied to cost-benefit and risk-reward decisions; Monte Carlo methods applied to non-stationary or structurally unstable systems.

The presentation will delineate the axes on which this atlas of risk management practice is mapped and then provide a guided tour.

Visualizing Our Uncertain World

Lace Padilla

We live in an uncertain world. From extreme weather events to pandemic forecasts, we are confronted with uncertainty every day. Unfortunately, uncertainty is highly challenging for both the general public and trained experts to understand, which is why effectively conveying uncertainty in scientific findings is critical. Visualizations afford thinking with such complex data because they capitalize on the visual system's highly advanced pattern recognition to process vast data sets at once. This efficient processing stands in stark contrast to the limitations of the sequential reading required by sets of symbolic numbers. This talk will discuss state-of-the-art uncertainty visualization techniques and the cognitive processes that can lead to misunderstandings of data with uncertainty. We will cover best practices in information visualization that support researchers' awareness of how visualization choices influence their audience's understanding of data, enabling informed and ethical decisions about conveying statistical results.
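
As a minimal illustration of the kind of choice the talk examines (using a toy forecast ensemble invented for this sketch, not material from the talk), the code below contrasts a mean-only plot with one that exposes uncertainty through an interval band and a few individual outcomes:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.arange(30)
# Hypothetical forecast ensemble: 200 plausible futures around a trend
ensemble = x * 0.5 + rng.normal(0, 1, (200, x.size)).cumsum(axis=1)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3), sharey=True)

# Left: mean only -- hides the uncertainty entirely
ax1.plot(x, ensemble.mean(axis=0), color="k")
ax1.set_title("Mean only")

# Right: 90% interval band plus a few individual plausible futures
lo, hi = np.percentile(ensemble, [5, 95], axis=0)
ax2.fill_between(x, lo, hi, alpha=0.3, label="90% interval")
ax2.plot(x, ensemble[:5].T, lw=0.8, alpha=0.7)
ax2.plot(x, ensemble.mean(axis=0), color="k", label="mean")
ax2.set_title("Uncertainty shown")
ax2.legend()
plt.tight_layout()
plt.show()
```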

Articulating uncertainty-at-source

Christian Wagner

Uncertainty is pervasive, from sensors in engineering applications to human perceptions – whether consumer preferences, patients' assessments of pain, or cyber security experts' assessments of vulnerability. Whether arising from vagueness, lack of knowledge, or variability, appropriate handling of such uncertainties depends on four main stages: capture, modelling, appropriate inference and, above all, communication of insights gained to decision makers.

In this talk, I will focus on uncertainty at the level of individual data sources, highlighting the difference between intra-source and the more commonly considered inter-source uncertainty. Focussing on the uncertainty arising from people, I will discuss recent work on capturing and analysing uncertain information efficiently and effectively using an interval-valued response format. Drawing on application examples, I will emphasise why capturing and modelling uncertainty-at-source is valuable if not crucial, before highlighting both recent advances and some of the remaining challenges, ranging from human factors to gaps in available inference techniques.

This talk is intended to give attendees a practical introduction to the use of intervals for handling uncertainty in data. No prior knowledge is expected, and I will discuss relevant types and sources of uncertainty before proceeding to review and demonstrate practical approaches and tools that enable the capture, modelling and analysis of interval-valued data. The talk will provide attendees with an end-to-end overview and concrete starting point for considering intervals within their own work, including highlighting practical tools available today.
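
A minimal sketch of the interval-valued idea, using invented survey numbers (the responses and the summary statistics chosen here are assumptions for illustration, not the speaker's tools):

```python
import numpy as np

# Hypothetical interval-valued survey responses: each expert rates a
# vulnerability's severity on a 0-100 scale by giving a range, not a point.
responses = np.array([
    [60, 80],   # expert 1: fairly uncertain
    [70, 75],   # expert 2: quite confident
    [40, 90],   # expert 3: very uncertain
    [65, 85],
])

lower, upper = responses[:, 0], responses[:, 1]
midpoints = (lower + upper) / 2
widths = upper - lower

# Intra-source uncertainty: how unsure each individual respondent is
print("mean interval width (intra-source):", widths.mean())
# Inter-source uncertainty: how much respondents disagree with each other
print("std of midpoints (inter-source):   ", midpoints.std(ddof=1))
# An interval-valued aggregate keeps both kinds of uncertainty visible
print("aggregate interval:", [lower.mean(), upper.mean()])
```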

How to (indirectly) measure things in cyber security
Ben Edwards

Measurement is the first step to making organizations more secure. Unfortunately, we can't always measure the exact quantities we'd like to. In this talk, we introduce a suite of statistical methods generally referred to as "latent variable models", with real-world examples, to address this challenge.
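
As a hedged illustration of the general idea (not necessarily the specific models covered in the talk), the sketch below uses scikit-learn's FactorAnalysis to recover a latent "security posture" variable from simulated noisy indicators:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical setup: "security posture" is unobservable, but it drives
# three noisy indicators we *can* measure (patch latency, phishing click
# rate, audit findings). Simulate 500 organizations.
posture = rng.normal(size=(500, 1))            # the latent variable
loadings = np.array([[-1.2, -0.8, -1.0]])      # how posture drives indicators
indicators = posture @ loadings + rng.normal(0, 0.5, (500, 3))

# Recover the latent factor from the observed indicators alone
fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(indicators)

# The recovered factor should track the true posture (up to sign and scale)
corr = np.corrcoef(scores[:, 0], posture[:, 0])[0, 1]
print(f"correlation of recovered factor with true posture: {abs(corr):.2f}")
```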

Better Than Beta: Finding the PERT-fict Fit for Event Frequency
David Severski

Estimating event frequency is a critical piece of risk analysis. BetaPERT is often used to model expert opinion, but it is a poor fit for real-world data. We'll use real incident data together with data science to understand the strengths and weaknesses of this distribution, and how to make it better!
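
A minimal sketch of the standard BetaPERT construction and the tail issue the talk alludes to, with invented parameters (the lognormal stand-in for "real" incident data is an assumption for illustration):

```python
import numpy as np
from scipy import stats

def betapert(a, m, b, lam=4.0):
    """BetaPERT as a Beta distribution rescaled to [a, b] (standard formulation)."""
    alpha = 1 + lam * (m - a) / (b - a)
    beta = 1 + lam * (b - m) / (b - a)
    return stats.beta(alpha, beta, loc=a, scale=b - a)

# Hypothetical expert estimate of annual event count: min 1, mode 4, max 50
pert = betapert(1, 4, 50)

# Real incident counts are typically heavy-tailed; a lognormal stands in here
heavy = stats.lognorm(s=1.5, scale=4.0)

# Compare tail behaviour: probability of seeing more than 30 events in a year
print("P(X > 30), BetaPERT: ", 1 - pert.cdf(30))
print("P(X > 30), lognormal:", 1 - heavy.cdf(30))
# BetaPERT's thin, bounded tail tends to understate extreme-but-plausible years
```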

Crisis Evolution: Evaluating Risk Across Multiple Phases of a Crisis
Kaitlyn Webster

Decision-makers and analysts often need to understand and predict not only the risk of an event but also how it might unfold. Will a minor crisis escalate? Improve? Backslide before being resolved? Multistate survival modeling allows analysts to quantify risks at each stage of a crisis.
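
A minimal sketch of the multistate idea, here simplified to a discrete-time Markov simulation with invented transition probabilities (a multistate survival model would estimate such transitions from data; nothing below comes from the talk):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical crisis states and monthly transition probabilities
states = ["minor", "escalated", "resolved"]
P = np.array([
    [0.70, 0.20, 0.10],   # from minor:     stay / escalate / resolve
    [0.15, 0.70, 0.15],   # from escalated: de-escalate / stay / resolve
    [0.00, 0.00, 1.00],   # resolved is absorbing
])

def simulate(start=0, horizon=24):
    """Simulate one crisis path; return whether it ever escalated."""
    s, escalated = start, False
    for _ in range(horizon):
        s = rng.choice(3, p=P[s])
        escalated |= (s == 1)
        if s == 2:
            break
    return escalated

runs = [simulate() for _ in range(20_000)]
print(f"P(a 'minor' crisis escalates within 24 months): {np.mean(runs):.2f}")
```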

Measuring a Risk Concept: Exploitability
Jay Jacobs

Measuring the concept of vulnerability exploitability has traditionally been limited to expert opinion. We are changing that with the Exploit Prediction Scoring System (EPSS). Learn how we've improved measurement by collecting real-world data, applying modern analysis, and mixing in domain expertise.
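
For readers who want a concrete starting point, the sketch below queries the public EPSS API published by FIRST; the endpoint and response fields match the documented API at the time of writing, but treat them as assumptions and check api.first.org for the current specification:

```python
import requests

# Query the public EPSS API for a given CVE (endpoint and field names are
# believed correct but should be verified against the published API docs).
resp = requests.get(
    "https://api.first.org/data/v1/epss",
    params={"cve": "CVE-2021-44228"},
    timeout=10,
)
resp.raise_for_status()
for row in resp.json().get("data", []):
    print(f'{row["cve"]}: probability of exploitation = {row["epss"]}, '
          f'percentile = {row["percentile"]}')
```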

Introduction to the Field of Expert Forecasting and Human Predictive Intelligence

Karen Hagar and Brian Labatte

Humans still beat artificial intelligence (AI) machines, as demonstrated in the 2018 IARPA Hybrid Forecasting Competition. If you want to know how they do it, what skills are needed, and how to develop those skills to mitigate future risks, these professionals will provide some insight.

Making R work for you (with automation!)

John Benninghoff

In my journey to learn R, I found that publishing my work and ensuring code quality was more tedious than the analysis itself. In this talk, I'll demonstrate a 'DevOps pipeline for R' - automation I created that reduces toil and helps me spend more time on the analysis work I enjoy most.

Introduction to Quantitative Risk Analysis

Apolonio "Apps" Garcia

Join us for this pre-event session introducing quantitative risk analysis. Think of it as a 100-level introduction.

Practical Tips to Enhance Your Data Visualizations

Jon Sternberg

Risk analysis is particularly powerful when paired with effective data visualization and storytelling. However, too often the design of our reports and dashboards detracts from the important messages we want to share. In this session, we will explore several easy-to-adopt good practices for data visualization that will pack an extra punch for your target audiences. We will use a popular business intelligence tool, Power BI, to demonstrate the concepts, and you will walk away with practical tips to enhance your next data visualizations.

Uncertainty Quantification in Cyber Risk Data Modelling using an Exploit-Explore Methodology and Monte Carlo Simulations

Serban Pop

We present a novel uncertainty-aware model that quantifies cyber risk data quality issues and focuses on their contextual representation by balancing the exploit-explore relationship through decision tree simulations, thereby increasing the system's performance and the quality of its predictions.
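
The talk pairs exploit-explore with decision tree simulations; the sketch below shows only the underlying exploit-explore trade-off, in its simplest epsilon-greedy form with invented source-quality values (not the speaker's model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setting: several data sources of unknown quality. Each pull
# yields a noisy "usefulness" signal; we balance exploiting the source that
# looks best so far against exploring the others (epsilon-greedy).
true_quality = np.array([0.3, 0.5, 0.7])   # unknown to the learner
n_sources = len(true_quality)
counts = np.zeros(n_sources)
estimates = np.zeros(n_sources)
epsilon = 0.1

for t in range(5_000):
    if rng.random() < epsilon:             # explore
        arm = rng.integers(n_sources)
    else:                                  # exploit
        arm = int(np.argmax(estimates))
    reward = rng.normal(true_quality[arm], 0.2)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean

print("estimated source quality:", np.round(estimates, 2))
print("pulls per source:        ", counts.astype(int))
```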

The Future of Cyber Insurance

Steven Schwartz

This talk will review current market conditions post-2020, why there have been systemic premium increases of 100-200% or more, and how the industry value chain can collaborate in transforming cyber insurance to align with cybersecurity while building trust, value, and certainty across the ecosystem.

The Softer Side of Risk: How Knowing People and Culture Can Help You

Jason Leuenberger

Who likes ice cold butter on warm toast?

Yep, no one. Oftentimes our risk management approaches feel just like that: cold moves on warm people who fear things.

In this talk, we'll look at levels of relationship, systems thinking, and cultural awareness to boost the stickiness of your program.

Risk Insights from another year of data-driven research

Wade Baker

We at the Cyentia Institute have the privilege of working with an incredible array of security datasets from many sources. Each of those offers a unique perspective on specific themes. Sometimes we step back and ask, "What have we learned about risk from all this data?" We'll share answers in this talk.

Human Nature in our Risk Programs: Work With it, Not Against It

Zachery Cossairt

Behavioral Economics has taught me we're better off working with human nature than against it. I'd like to explain how understanding insights drawn from this field can improve our ability to communicate risk to other humans and drive positive change when developing and maturing a risk program.

Using Risk Scenarios in Risk Identification and Analysis

Lisa Young and Tony Martin-Vegue

Scenario analysis has been used for decades in disciplines such as engineering, medicine, economics, and strategic planning. Risk scenarios allow an organization to evaluate its current state as well as future possibilities, and they can help you better understand the risk landscape and improve decision-making. Used as part of your risk identification and analysis process, risk scenarios are a powerful tool that actively engages stakeholders in ongoing efforts to identify the situations, conditions, and threats that may have a negative impact on service delivery and continued operations.

Going Beyond Emphatic Assertion to Assess a Cyber Risk Model's Fitness for Purpose in a Financial Institution

Mike Jerbic and Dr. Bob Mark

Congratulations, you're quantifying cyber risk in economic terms! But can you trust those results? Can your stakeholders and regulators? Our approach to vetting cyber risk models goes beyond emphatic assertion to a careful evaluation of models in terms of their accuracy and fitness for purpose.
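
One concrete ingredient of such a vetting exercise is a probabilistic backtest. The sketch below, using simulated stand-in forecasts and outcomes rather than the speakers' method, computes a Brier score and a simple calibration table:

```python
import numpy as np

# Hypothetical backtest: a cyber risk model issued annual loss-event
# probabilities for 200 business units; we later observed what happened.
rng = np.random.default_rng(11)
predicted = rng.uniform(0.05, 0.6, size=200)          # model's forecasts
observed = (rng.random(200) < predicted).astype(int)  # stand-in outcomes

# Brier score: mean squared error of probabilistic forecasts (lower is better)
brier = np.mean((predicted - observed) ** 2)
print(f"Brier score: {brier:.3f}")

# Simple calibration table: within each forecast bucket, did events occur
# about as often as the model said they would?
bins = np.linspace(0, 1, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (predicted >= lo) & (predicted < hi)
    if mask.any():
        print(f"forecast {lo:.1f}-{hi:.1f}: predicted {predicted[mask].mean():.2f}, "
              f"observed {observed[mask].mean():.2f} (n={mask.sum()})")
```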

Using Middle Management to Drive Executive Support for an Information Security Program in an Atypical Organizational Design

Nicholas Bakewell, CISSP, CISA

"Speak their language" and "align your objectives" may not be effective in certain atypical organizational designs to gain Executive buy-in. Leverage mental models, problem framing, and Newton's first law of motion on middle management in order to create the Executive support you need. Join us for this pre event talk.

"What are we missing?"- An Open-Ended Approach

David Grimmer

Our risk quantifications have greater success when we leave the scope open. We now find assessments taking us to new teams with unexpected risks, and we measure larger impacts closer to business operations and revenues. This is how we better inform the decision makers who write the checks.

The Risky World of Whisk(e)y: The Geopolitics, Economics, and Technology Behind Every Golden Dram

Milena Rodban

Milena Rodban, Senior Advisor to the Director of the National Risk Management Center at the Cybersecurity and Infrastructure Security Agency (CISA), will discuss the challenge of reducing systemic cyber risk, with an overview of CISA’s Systemic Cyber Risk Reduction Venture. Systemic cyber risk reduction is a shared challenge and a shared responsibility. The Venture can serve as a model for organizations to map their own critical functions and create a more secure tomorrow for all of us.