
Ciaran Finnegan is the cybersecurity practice lead at CMD Solutions Australia and Phil Massyn is a senior security consultant there. About a year ago they began using Steampipe and its CrowdStrike plugin to scan their customers’ AWS environments.

Now Finnegan and Massyn are building an internal program for what they call “continuous controls assurance.” Another way to say it might be “KPIs as code.” Here’s an example of a KPI (key performance indicator):

Critical or high severity vulnerabilities are remediated within the organization’s policy timeframe.

How do you translate that goal into code? With Steampipe, you do it by writing SQL queries that can join across the various APIs that your software stack exposes. In this case that means querying an endpoint management system, CrowdStrike, then joining with data from a workforce management system, Salesforce (with the understanding that either or both of these could change) to produce query results that map from a vulnerability to a device to a person.

Here’s the query.


select
    ZTA.system_serial_number || ' (' || salesforce_krow__project_resources__c.name || ')' as resource,
    case
        when ZTA.assessment ->> 'os' = '100' then 'ok'
        else 'alarm'
    end as status,
    ZTA.system_serial_number || ' (' || salesforce_krow__project_resources__c.name || ') has a score of ' || (ZTA.assessment ->> 'os') as reason,
    jsonb_path_query_array(ZTA.assessment_items['os_signals'], '$[*] ? (@.meets_criteria != "yes").criteria') #>> '{}' as detail
from
    crowdstrike_zta_assessment ZTA
-- Link the serial number to the Salesforce data, so we can find the owner
-- Left join is important, in case there isn't a link, we still want to see the data
left join salesforce_fixed_asset__c
    on ZTA.system_serial_number = serial_number__c
-- Here an inner join is important.  If the serial number exists in Krow, but no owner, that could indicate
-- a data inconsistency in Krow, which will break the query.  We want an inner join, because both entries must exist
inner join salesforce_krow__project_resources__c
    on salesforce_fixed_asset__c.project_resource__c = salesforce_krow__project_resources__c.id

The tables in play are provided by the CrowdStrike and Salesforce plugins. None of the predefined Salesforce tables would have met the need, but that didn’t matter because CMD Solutions were using their own custom Salesforce objects, and because the Salesforce plugin can dynamically acquire custom objects.

You can run the query in any of the ways Steampipe queries run: with the Steampipe CLI, with psql (or any Postgres CLI), with Metabase (or any Postgres-compatible BI tool), with Python (or any programming language). Or, as CMD Solutions have done, you can wrap a query in a Steampipe control that forms part of a benchmark that runs on the command line with steampipe check, or as a dashboard with steampipe dashboard.

From queries to controls and benchmarks

Here’s the control that presents the query. It’s just a thin wrapper that names and defines a KPI.

 
control "SEC_002" {
    title = "SEC-002 - % of in-scope staff compute devices with a Crowdstrike Agent Zero Trust Score for OS of 100"
    sql = <<EOT
        -- (the query shown above)
    EOT
}

The control rolls up into a benchmark.

 
benchmark "sec" {
    title = "Security"
    children = [
        ...
        control.SEC_002,
        ...
    ]
}

So you can run SEC_002 individually: steampipe check control.SEC_002. Or you can run all the controls in the benchmark: steampipe check benchmark.sec. Results can flow out in a variety of formats for downstream analysis.

But first, where and how to run steampipe check in a scheduled manner? From their documentation:

steampipe-scheduled-job-runner
Run scheduled Steampipe benchmark checks securely and inexpensively on AWS using ECS Fargate. We use AWS Copilot to define Step Functions and AWS ECS Fargate scheduled jobs to run Steampipe checks in Docker. Steampipe benchmarks and controls are retrieved at run-time from a git repository to support a GitOps workflow.

The job runs every night, pulls down queries from a repo, executes those against targets, and exports the outputs to Amazon S3—as Markdown, and as JSON that’s condensed by a custom template.

Checking DMARC configuration

Here's another KPI:

All organizational email domains are configured for DMARC

And here’s the corresponding query, again wrapped in a control.

 
control "INF_001" {
    ...
}

The tables here come from the CSV and Net plugins. Like Salesforce, the CSV plugin acquires tables dynamically. In this case the list of domains to check lives in a file called domains.csv retrieved from a domain name system management API. The domain names drive a join with the net_dns_record table to figure out, from MX records, which names are configured for DMARC.
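The full query isn’t reproduced above, but here’s a hedged sketch of the shape such a check might take, assuming a dynamically acquired "domains.csv" table with a domain column (both names are illustrative) joined with the Net plugin’s net_dns_record table:

```sql
-- Hypothetical sketch: alarm on domains that lack a DMARC TXT record.
-- "domains.csv" and its "domain" column are assumptions about the input file.
with dmarc as (
    select domain, value
    from net_dns_record
    where domain in (select '_dmarc.' || domain from "domains.csv")
      and type = 'TXT'
)
select
    a.domain as resource,
    case when d.value like 'v=DMARC1%' then 'ok' else 'alarm' end as status,
    a.domain ||
        case when d.value like 'v=DMARC1%'
             then ' is configured for DMARC'
             else ' is not configured for DMARC' end as reason
from "domains.csv" a
left join dmarc d on d.domain = '_dmarc.' || a.domain
```

The left join keeps domains with no DMARC record in the result set, so they surface as alarms rather than silently disappearing.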

Like all Steampipe controls, these report the required columns resource, status, and reason. It’s purely a convention, as you can write all kinds of queries against plugin-provided tables, but when you follow this convention your queries play in Steampipe’s benchmark and dashboard ecosystem.
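To illustrate the convention, here’s a minimal single-table example (using the AWS plugin’s aws_ec2_instance table, chosen arbitrarily; any plugin table works the same way):

```sql
-- Minimal example of the resource/status/reason convention.
select
    instance_id as resource,
    case when monitoring_state = 'enabled' then 'ok' else 'alarm' end as status,
    instance_id || ' detailed monitoring is ' || monitoring_state as reason
from aws_ec2_instance;
```

Any query that emits these three columns can be wrapped in a control and rolled into a benchmark.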

Checking for inactive user accounts

It’s true that joining across APIs—with SQL as the common way to reason over them—is Steampipe’s ultimate superpower. But you don’t have to join across APIs. Many useful controls query one or several tables provided by a single plugin.

Here’s one more KPI:

Inactive Okta accounts are reviewed within the organization’s policy time frames

Here’s the corresponding control.

 
control "IAM_001" {
    ...
}

Controls like this express business logic in a clear and readable way, and require only modest SQL skill.
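The query itself isn’t shown above, but a hedged sketch of what such a check might look like, using the Okta plugin’s okta_user table and assuming a 90-day policy window:

```sql
-- Hypothetical sketch: alarm on Okta accounts inactive beyond the policy window.
-- The 90-day window is an assumed policy value, not taken from the original.
select
    login as resource,
    case
        when last_login > now() - interval '90 days' then 'ok'
        else 'alarm'
    end as status,
    login || ' last logged in ' || coalesce(last_login::text, 'never') as reason
from okta_user
where status = 'ACTIVE';
```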

Next steps

As daily snapshots accumulate, Finnegan and Massyn are exploring ways to visualize them and identify trends and key risk indicators (KRIs). A Python script reads the customized steampipe check output and builds JSON and Markdown outputs that flow to S3. They’ve built a prototype Steampipe dashboard to visualize queries, and are considering how a visualization tool might help complete the picture.

Why do all this? “There are products on the market we could buy,” Finnegan says, “but they don’t integrate with all our services, and don’t give us the granular mapping from business objectives to SQL statements. That’s the magic of Steampipe for us.”

For more details, see the repos for their Fargate runner and their continuous controls assurance module. If you have a similar story to tell, please get in touch. We’re always eager to know how people are using Steampipe.

Copyright © 2022 IDG Communications, Inc.
