Governance & Compliance – SMS
Solving | Managing | Securing

The Many Faces of Agile Integration
Wed, 27 Mar 2024

Programs, even simple projects, can seem like exercises in herding cats. If you’re brave enough to manage such an effort, you’ve learned that one of the biggest challenges to making progress is pulling all the pieces together. Requirements for cost, schedule, personnel, and quality are only some of our concerns when we set out to deliver value to our customer. Rather than turn this into the kind of lecture the Project Management Institute (PMI) delivers in its PMBOK, however, I’d like to discuss an approach many are trying to use to streamline execution.

I wouldn’t blame you if you just cried “Bingo” to your co-workers, having now completed your cliché cover-all card. Nonetheless, humor me. All the buzzwords will be made clear at the outset. And one of those daunting catch phrases I’d like to discuss is a little term PMI calls “agile integration.” It need not be so unnerving.

The term “agile” is a subject of some controversy within government circles. Our superiors elevate those who manage projects with the flexibility that this ideal conveys. We are encouraged to adopt this mindset, amid an unspoken dissuasion of rigidity. It makes sense, if you consider the perspective of our leadership who seem intent on breeding a “can do” attitude among the workforce. After all, who wants to be told that he can’t have that shiny new feature? “Be a little more flexible,” they say, in so many words.

And so, it seems we have adopted the use of agile integration to progress efforts with an eye toward driving flexibility. Inevitably, however, there will be compromises. Stakeholder satisfaction and bottom-line expectations are only some of the concerns that weigh in the balance when we plan these types of efforts.

Still, we have learned to pick our battles. Sometimes the need to flesh out the technical details of a solution can simmer on the back burner while we deal with more pressing issues. Let’s say we run into some funding problems. As we shift our focus from pushing for deliverables to negotiating a budget solution, our teams forge ahead. Who knows? Maybe the downtime allows them to dream up a better solution, possibly even cheaper than everyone initially imagined.

At the program level, we might seize the opportunity to use the freed-up labor to crash the schedule on another effort. Then again, we may swap the team out with some other employees we’ve been wanting to send to training. We might allow someone a long-awaited vacation. The expertise we are using may be the kind that lends itself to lulls in execution. Either way, the resources are freed, and freed resources can be utilized in any number of ways.

This approach to the agile objective, however, falls short. Consider the old mantra, “You can have fast, cheap, or good. Pick two.” That is to say, you can crash the schedule for a high-quality product, but it will be costly. You can ration out the limited finances you can afford over time to achieve an upscale result, but it won’t be fast. And of course, you can produce quickly on the cheap, but the finished product will likely not be one of high quality. There are always limitations.

Well into execution, variables such as quality don’t lend themselves to course alterations. The nature of many products, particularly those produced in a linear fashion (e.g., a schedule of deployments), does not allow the rework necessary for quality changes. For this reason, quality issues need to be addressed, not as a planning matter, but as an entirely separate project, previous or subsequent to any implementation – a design matter.

Consider the alternative. Has the project’s sponsor made any provisions for your team to revisit work completed per the initial quality standards? Do you have the expertise available to correct the issues that require rework? Even if you do, are you familiar enough with the sponsor’s standards to make alterations appropriate to the target environment? I could go on. Suffice to say that rework to address quality needs to be documented and carefully planned before proceeding with any changes.

Given the broader array of resources available at the program level, however, the expertise, the understanding, the relationships, and any other elements that may be required to accomplish quality rework could be available. To determine that, these and likely other relevant factors need to be weighed carefully. Perhaps the engineering needed to accomplish the change is minimal. One level up, program managers command a whole different set of resources that could make the rework a consideration.

And so, the question arises: Are we to practice the agile methodology at the program level, but not necessarily the project level? As we’ve found, at the level of the project, overcoming some of these limitations can be achieved with an attitude of flexibility at the outset. “Agile” flexibility, however, lies in the management methodology, not within the project’s planning. Project managers who attempt to channel that flexibility into planning are practicing what PMI calls “rolling wave planning” rather than conducting agile projects. There is a difference.

Agile projects occur in iterations. Many times, those iterations are a series of optimization efforts that enhance an existing product. Efforts like software development are ideal candidates for the agile methodology. A schedule of implementations or a manufacturing process that requires an engineering team to deliver a complete design ready for customer use is difficult to iterate over time. That is not to say that a finished product cannot benefit from a subsequent version. It’s just that, once a product is in the hands of the customer, it is difficult to optimize.

It is plain to see why integrating the agile methodology into the project management process is the daunting task feared by so many. No matter how much pressure we may feel to force agile integration, it is not always the most sensible approach. Given the expectations of our leadership, difficult conversations may be in order. It is always wise to help our customers understand the limitations of progress.

Assuming you and your customers agree to follow through with an agile integration, the effort is best approached with the cooperation of the PMO at the program level. If not promulgated by the PMO, managers of individual projects may find that they do not have the support they need to make the adjustment. Likewise, forcing the change at the level of the portfolio may impose the “one size fits all” requirement discussed above. At the program level, however, projects will have a common flavor that may stand to benefit from the method, and individual PMs will have the opportunity to share the lessons they learn.

There is an array of approaches to the agile practice (Scrum, Kanban, Lean, XP), any of which may be used as the framework necessary to get started. Leaders can tweak any one of these to suit their efforts. Just remember: while there is nothing wrong with integrating agile tactics, that does not necessarily amount to integrating the agile methodology.

For more information on this and other project management related topics, see pmi.org.

Use Azure Automation Runbook to deploy Nessus Agent via Terraform
Thu, 02 Nov 2023

Problem

All Virtual Machines (VMs) in the Azure environment must have Nessus Agent installed and registered to a newly created Nessus Manager without direct SSH or RDP access to any of the VMs.

Solution

Use an existing Azure Automation Account to deploy the Nessus Agent via a runbook. The runbook adds a Virtual Machine extension that carries out the steps needed to install and register the Nessus Agent based on the operating system. This solution can be used to install just about anything on a Windows or Linux Virtual Machine.

What is an Azure Automation Account?

An Azure Automation Account is a cloud-based management service provided by Microsoft, designed to help automate, orchestrate, and manage repetitive tasks and processes within the Azure environment. It serves as a centralized location for storing various automation assets, such as runbooks, credentials, and integration modules, enabling users to streamline their automation efforts and improve operational efficiency.

In this case, an existing Azure Automation Account that was previously created is being used for this effort. If you don’t have one, you can create a new one strictly for this purpose. There are a couple of requirements to make this work:

  • Associate a user-assigned managed identity that has, at a minimum, the “Virtual Machine Contributor” Azure role on all subscriptions in your tenant.
  • The Azure Automation Account must be linked to the same Log Analytics workspace that your VMs are linked to. In this environment, that was previously taken care of for another effort. To associate VMs with a Log Analytics workspace, you will need the OMS or MMA agent; there are many ways to tackle this.

As mentioned above, if you don’t already have an Automation Account, you will need to create one. Below is an example of creating an Azure Automation Account with Terraform.

resource "azurerm_automation_account" "aa_account" {
location = "<azure region>"
name     = "<name of account>"
resource_group_name = var.rg
identity {
  identity_ids = ["<Your Managed identity ids>"]
  type         = "UserAssigned
}

What is an Azure Automation Runbook?

An Azure Automation Runbook is a set of tasks or operations that you can automate within the Azure environment. It is essentially a collection of PowerShell or Python script(s) that perform various actions, such as managing resources, configuring systems, or handling other operational tasks. Azure Automation Runbooks are commonly used for automating repetitive tasks, scheduling maintenance activities, and orchestrating complex workflows within Azure.

PowerShell 5.x was the scripting language used for this task, in part because Terraform does not currently support PowerShell 7.1 as a runbook type (see https://github.com/hashicorp/terraform-provider-azurerm/issues/14089).

Terraform

Terraform is the current Infrastructure as Code tool for this environment, so it is used in this scenario. Let’s take a look at a snippet of the main.tf:

resource "azurerm_automation_runbook" "nessus_install" {
  name                    = var.runbook_name
  location                = data.azurerm_resource_group.ops.location
  resource_group_name     = data.azurerm_automation_account.ops.resource_group_name
  automation_account_name = data.azurerm_automation_account.ops.name
  log_verbose             = true
  log_progress            = true
  description             = var.runbook_description
  runbook_type            = var.runbook_type
  tags                    = var.default_tags
  content = templatefile("${path.module}/runbook/nessus.ps1", {
    umi                         = data.azurerm_user_assigned_identity.identity.client_id
    tenantid                    = var.tenant_id
    scriptnamelinux             = var.scritpname_linux
    scriptnamewindows           = var.scritpname_win
    storageaccountcontainer     = data.azurerm_storage_container.sa.name
    storageaccountresourcegroup = data.azurerm_resource_group.sa.name
    storageaccountname          = var.sa_acct
    workbookname                = var.runbook_name
    storageaccountsub           = data.azurerm_subscription.sa.subscription_id
    client_id                   = data.azurerm_user_assigned_identity.identity.client_id
    vms_to_exclude              = join(",", [for vm in local.vms_file_content : "\"${vm}\""])
    defaultsub                  = ""
  })
}

resource "azurerm_automation_job_schedule" "nessus_install" {
  resource_group_name     = data.azurerm_automation_account.ops.resource_group_name
  automation_account_name = data.azurerm_automation_account.ops.name
  schedule_name           = azurerm_automation_schedule.nessus_install.name
  runbook_name            = azurerm_automation_runbook.nessus_install.name

}

resource "azurerm_automation_schedule" "nessus_install" {
  name                    = var.nessus_schedule
  resource_group_name     = data.azurerm_automation_account.ops.resource_group_name
  automation_account_name = data.azurerm_automation_account.ops.name
  frequency               = var.schedule_frequency
  timezone                = var.timezone
  start_time              = var.start_time
  description             = var.schedule_description
  week_days               = var.week_days
  expiry_time             = var.expiry_time
}

azurerm_automation_runbook: This section defines the Azure Automation Runbook, including its name, location, resource group, and related configurations. The templatefile function takes several inputs that let you adjust your variables and render the script with the desired values. The runbook content comes from a PowerShell script file named `nessus.ps1`, which orchestrates the Nessus installation process and is covered in the next section.
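Terraform's templatefile function works much like plain ${name} placeholder substitution. As a rough illustration only (Python's string.Template, not Terraform itself; the values below are hypothetical):

```python
from string import Template

# Miniature stand-in for runbook/nessus.ps1: Terraform replaces ${...}
# placeholders with the values passed to templatefile().
runbook_template = Template(
    "TenantId        = '${tenantid}'\n"
    "ScriptNameLinux = '${scriptnamelinux}'\n"
)

rendered = runbook_template.substitute(
    tenantid="11111111-2222-3333-4444-555555555555",  # hypothetical tenant ID
    scriptnamelinux="nessus-linux.sh",
)
print(rendered)
```

The rendered text, with every placeholder filled in, is what ends up stored as the runbook's content.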

azurerm_automation_job_schedule: Here, we set up an Azure Automation Job Schedule, which determines the frequency and timing of the execution of the Nessus installation process.

azurerm_automation_schedule: This section specifies the details of the schedule, including the frequency, time zone, start time, and expiry time for the Nessus installation process. This needs to be run on a weekly basis to incorporate any new VMs that get created in any subscription.

If you choose to use the code as-is, the variables used in the templatefile are explained below.

    umi                         = User-assigned Managed Identity associated with the Azure Automation Account
    tenantid                    = The Tenant ID
    scriptnamelinux             = Name of the Linux shell script
    scriptnamewindows           = Name of the Windows script
    storageaccountcontainer     = Name of the Storage Account container where the scripts reside
    storageaccountresourcegroup = Name of the Resource Group where the Storage Account resides
    storageaccountname          = Name of the Storage Account
    workbookname                = Name of the Runbook you are creating
    storageaccountsub           = The Subscription ID of the Storage Account
    vms_to_exclude              = List of VM names to skip, read from vms.txt (one per line)
    defaultsub                  = "" # Leave empty to loop through all active subscriptions; otherwise set the subscription you want this script to run against

The vms_to_exclude variable was added so you can skip VMs by name if you choose. An issue occurred where a VM’s resources were pegged and the script would eventually error out waiting for the VM to finish, so this logic was inserted to mitigate that. A flat text file, vms.txt, is used for this purpose; list the VMs to exclude in this file, one per line.
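The Terraform expression shown earlier turns the lines of vms.txt into a comma-separated list of quoted names that gets rendered into the runbook. A quick sketch of the same transformation in Python (the VM names are illustrative):

```python
# vms.txt contains one VM name per line; this mimics Terraform's
# join(",", [for vm in local.vms_file_content : "\"${vm}\""]) over that list.
vms_file_content = ["vm-app-01", "vm-db-02"]  # would come from reading vms.txt

vms_to_exclude = ",".join(f'"{vm}"' for vm in vms_file_content)
print(vms_to_exclude)  # → "vm-app-01","vm-db-02"
```

That rendered string is what the runbook later wraps in parentheses to build its PowerShell exclusion array.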

PowerShell

Let’s take a look at the PowerShell script the runbook renders, `nessus.ps1`:

Disable-AzContextAutosave -Scope Process

$AzureContext = (Connect-AzAccount -Identity -Environment AzureUSGovernment -AccountId ${umi}).context
$TenantId = '${tenantid}'
$scriptNameLinux = '${scriptnamelinux}'
$scriptNameWindows = '${scriptnamewindows}'
$storageAccountContainer = '${storageaccountcontainer}'
$storageAccountResourceGroup = '${storageaccountresourcegroup}'
$storageAccountName = '${storageaccountname}'
$defaultSubscriptionId = '${defaultsub}'

$settingsLinux = @{
    "fileUris"         = @("https://$storageAccountName.blob.core.usgovcloudapi.net/$storageAccountContainer/$scriptNameLinux")
    "commandToExecute" = "bash $scriptNameLinux"
} | ConvertTo-Json

$settingsWindows = @{
    "fileUris"         = @("https://$storageAccountName.blob.core.usgovcloudapi.net/$storageAccountContainer/$scriptNameWindows")
    "commandToExecute" = "powershell -NonInteractive -ExecutionPolicy Unrestricted -File $scriptNameWindows"
} | ConvertTo-Json

$storageKey = (Get-AzStorageAccountKey -Name $storageAccountName -ResourceGroupName $storageAccountResourceGroup)[0].Value

$protectedSettingsLinux = @{
    "storageAccountName" = $storageAccountName
    "storageAccountKey"  = $storageKey
} | ConvertTo-Json

$protectedSettingsWindows = @{
    "storageAccountName" = $storageAccountName
    "storageAccountKey"  = $storageKey
} | ConvertTo-Json

$currentAZContext = Get-AzContext

if ($currentAZContext.Tenant.id -ne $TenantId) {
    Write-Output "This script is not authenticated to the needed tenant. Running authentication."
    Connect-AzAccount -TenantId $TenantId
}
else {
    Write-Output "This script is already authenticated to the needed tenant - reusing authentication."
}

$subs = @()

if ($defaultSubscriptionId -eq "") {
    $subs = Get-AzSubscription -TenantId $TenantId | Where-Object { $_.State -eq "Enabled" }
}
else {
    if ($defaultSubscriptionId.IndexOf(',') -eq -1) {
        $subs = Get-AzSubscription -TenantId $TenantId -SubscriptionId $defaultSubscriptionId
    }
    else {
        $defaultSubscriptionId = $defaultSubscriptionId -replace '\s', ''
        $subsArray = $defaultSubscriptionId -split ","
        foreach ($subsArrayElement in $subsArray) {
            $currTempSub = Get-AzSubscription -TenantId $TenantId -SubscriptionId $subsArrayElement
            $subs += $currTempSub
        }
    }
}



# Array of VM names to skip (rendered by Terraform's templatefile)
$excludeVmNamesArray = (${vms_to_exclude})


foreach ($currSub in $subs) {
    Set-AzContext -subscriptionId $currSub.id -Tenant $TenantId

    if (!$?) {
        Write-Output "Error occurred during Set-AzContext. Error message: $( $error[0].Exception.InnerException.Message )"
        Write-Output "Trying to disconnect and reconnect."
        Disconnect-AzAccount
        Connect-AzAccount -TenantId $TenantId -SubscriptionId $currSub.id
        Set-AzContext -subscriptionId $currSub.id -Tenant $TenantId
    }

    $VMs = Get-AzVM

    foreach ($vm in $VMs) {
        if ($excludeVmNamesArray -contains $vm.Name) {
            Write-Output "Skipping VM $($vm.Name) as it is excluded."
            continue
        }

        $status = (Get-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Status).Statuses[1].DisplayStatus

        if ($status -eq "VM running") {
            Write-Output "Processing running VM $( $vm.Name )"

            $extensions = (Get-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name).Extensions

            foreach ($ext in $extensions) {
                if ($null -ne $vm.OSProfile.WindowsConfiguration) {
                    if ($ext.VirtualMachineExtensionType -eq "CustomScriptExtension") {
                        Write-Output "Removing CustomScriptExtension with name $( $ext.Name ) from VM $( $vm.Name )"
                        Remove-AzVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name -Name $ext.Name -Force
                        Write-Output "Removed CustomScriptExtension with name $( $ext.Name ) from VM $( $vm.Name )"
                    }
                }
                else {
                    if ($ext.VirtualMachineExtensionType -eq "CustomScript") {
                        Write-Output "Removing CustomScript extension with name $( $ext.Name ) from VM $( $vm.Name )"
                        Remove-AzVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name -Name $ext.Name -Force
                        Write-Output "Removed CustomScript extension with name $( $ext.Name ) from VM $( $vm.Name )"
                    }
                }
            }

            if ($vm.StorageProfile.OsDisk.OsType -eq "Windows") {
                Write-Output "Windows VM detected: $( $vm.Name )"
                $settingsOS = $settingsWindows
                $protectedSettingsOS = $protectedSettingsWindows
                $publisher = "Microsoft.Compute"
                $extensionType = "CustomScriptExtension"
                $typeHandlerVersion = "1.10"
            }
            elseif ($vm.StorageProfile.OsDisk.OsType -eq "Linux") {
                Write-Output "Linux VM detected: $( $vm.Name )"
                $settingsOS = $settingsLinux
                $protectedSettingsOS = $protectedSettingsLinux
                $publisher = "Microsoft.Azure.Extensions"
                $extensionType = "CustomScript"
                $typeHandlerVersion = "2.1"
            }
            $customScriptExtensionName = "NessusInstall"

            Write-Output "$customScriptExtensionName installation on VM $( $vm.Name )"

            Set-AzVMExtension -ResourceGroupName $vm.ResourceGroupName `
                -Location $vm.Location `
                -VMName $vm.Name `
                -Name $customScriptExtensionName `
                -Publisher $publisher `
                -ExtensionType $extensionType `
                -TypeHandlerVersion $typeHandlerVersion `
                -SettingString $settingsOS `
                -ProtectedSettingString $protectedSettingsOS

            Write-Output "---------------------------"
        }
        else {
            Write-Output "VM $( $vm.Name ) is not running, skipping..."
        }
    }

    # Reset context only when a default subscription was provided
    if ($defaultSubscriptionId -ne "") {
        Set-AzContext -SubscriptionId $defaultSubscriptionId -Tenant $TenantId
    }
}

This particular environment is in Azure Government (note the `-Environment AzureUSGovernment` parameter and the `usgovcloudapi.net` endpoints), but the script could be modified for any cloud environment.

The script is designed to automate the deployment of custom script extensions to multiple Azure VMs across different subscriptions. It provides flexibility for both Linux and Windows VMs and ensures that any existing custom script extension is removed before deployment, because a VM cannot have a second extension of the same type installed.
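Per VM, the flow boils down to: skip excluded names, skip VMs that aren't running, then pick the extension publisher/type by OS. A condensed sketch of that decision logic (plain Python for illustration; these are not Azure SDK calls):

```python
def choose_extension(os_type: str) -> dict:
    """Mirror the publisher/type/version selection in the runbook."""
    if os_type == "Windows":
        return {"publisher": "Microsoft.Compute",
                "type": "CustomScriptExtension", "version": "1.10"}
    return {"publisher": "Microsoft.Azure.Extensions",
            "type": "CustomScript", "version": "2.1"}

def should_process(vm_name: str, status: str, excluded: set) -> bool:
    # Same gating as the runbook: not excluded and currently running
    return vm_name not in excluded and status == "VM running"

print(should_process("vm-db-02", "VM running", {"vm-db-02"}))  # → False
print(choose_extension("Linux")["type"])  # → CustomScript
```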

OS Scripts

Now let’s look at the Windows script that `nessus.ps1` calls:

param([switch]$elevated) # set when the script relaunches itself with admin rights

$installerUrl = "<URL to the msi>"

$NESSUS_GROUP="<Name of your Nessus Group>"
$NESSUS_KEY="<Name of Nessus Key>"
$NESSUS_SERVER="<FQDN of Nessus Server>"
$NESSUS_PORT="<Port if different from standard 8834>"

$installerPath = "C:\TEMP\nessusagent.msi"

# Embedded single quotes so the name is quoted inside the WQL query below
$windows_package_name = "'Nessus Agent (x64)'"

$installed = Get-WmiObject -Query "SELECT * FROM Win32_Product WHERE Name = $windows_package_name" | Select-Object Name

function Test-Admin {
    $currentUser = New-Object Security.Principal.WindowsPrincipal $([Security.Principal.WindowsIdentity]::GetCurrent())
    $currentUser.IsInRole([Security.Principal.WindowsBuiltinRole]::Administrator)
}

if ((Test-Admin) -eq $false) {
    if ($elevated) {
        # Already tried to elevate once; do not loop
    }
    else {
        Start-Process powershell.exe -Verb RunAs -ArgumentList ('-noprofile -file "{0}" -elevated' -f ($myinvocation.MyCommand.Definition))
    }
    exit
}

 
'running with full privileges'

if ($installed) {
    Write-Output "Nessus Agent is already installed. Exiting."
}
else {
    Write-Output "Downloading Nessus Agent MSI installer..."
    Invoke-WebRequest -Uri $installerUrl -OutFile $installerPath

    Write-Output "Installing Nessus Agent..."
    Start-Process -FilePath msiexec.exe -ArgumentList "/i $installerPath NESSUS_GROUPS=`"$NESSUS_GROUP`" NESSUS_SERVER=`"$NESSUS_SERVER`" NESSUS_KEY=$NESSUS_KEY /qn" -Wait

    $installed = Get-WmiObject -Query "SELECT * FROM Win32_Product WHERE Name = $windows_package_name" | Select-Object Name

    if ($installed) {
        Write-Output "Nessus Agent has been successfully installed."
    }
    else {
        Write-Output "Failed to install Nessus Agent."
    }
}

if (Test-Path $installerPath) {
    Remove-Item -Path $installerPath -Force
}

 
Function Start-ProcessGetStreams {
    [CmdLetBinding()]
    Param(
        [System.IO.FileInfo]$FilePath,
        [string[]]$ArgumentList
    )

    $pInfo = New-Object System.Diagnostics.ProcessStartInfo
    $pInfo.FileName = $FilePath
    $pInfo.Arguments = $ArgumentList
    $pInfo.RedirectStandardError = $true
    $pInfo.RedirectStandardOutput = $true
    $pInfo.UseShellExecute = $false
    $pInfo.CreateNoWindow = $true
    $pInfo.WindowStyle = [System.Diagnostics.ProcessWindowStyle]::Hidden

    $proc = New-Object System.Diagnostics.Process
    $proc.StartInfo = $pInfo

    Write-Verbose "Starting $FilePath"
    $proc.Start() | Out-Null
    Write-Verbose "Waiting for $($FilePath.BaseName) to complete"
    # Read the streams before waiting so large output cannot deadlock the pipe
    $stdOut = $proc.StandardOutput.ReadToEnd()
    $stdErr = $proc.StandardError.ReadToEnd()
    $proc.WaitForExit()
    $exitCode = $proc.ExitCode

    Write-Verbose "Standard Output: $stdOut"
    Write-Verbose "Standard Error: $stdErr"
    Write-Verbose "Exit Code: $exitCode"

    [PSCustomObject]@{
        "StdOut"   = $stdOut
        "StdErr"   = $stdErr
        "ExitCode" = $exitCode
    }
}



Function Get-NessusStatsFromStdOut {
    Param(
        [string]$stdOut
    )

    $stats = @{}

    # Parse "Key: Value" lines into a hashtable with normalized keys
    $stdOut -split "`r`n" | ForEach-Object {
        if ($_ -like "*:*") {
            $result = $_ -split ":"
            $stats.Add(($result[0].Trim() -replace "[^A-Za-z0-9]", "_").ToLower(), $result[1].Trim())
        }
    }

    Return $stats
}


Function Get-DateFromEpochSeconds {
    Param(
        [int]$seconds
    )

    $utcTime = (Get-Date "1970-01-01") + ([System.TimeSpan]::FromSeconds($seconds))
    Return Get-Date $utcTime.ToLocalTime() -Format "yyyy-MM-dd HH:mm:ss"
}

 
$nessusExe = Join-Path $env:ProgramFiles -ChildPath "Tenable\Nessus Agent\nessuscli.exe"
if (-not (Test-Path $nessusExe)) {
    Throw "Cannot find nessuscli.exe; the Nessus Agent does not appear to be installed."
}

Write-Output "Getting Agent Status..."
$agentStatus = Start-ProcessGetStreams -FilePath $nessusExe -ArgumentList "agent status"

 

If ($agentStatus.StdOut -eq "" -and $agentStatus.StdErr -eq "") {
    Write-Output "No data returned from nessuscli, linking now..."
    Start-ProcessGetStreams -FilePath $nessusExe -ArgumentList "agent link --key=$NESSUS_KEY --groups=`"$NESSUS_GROUP`" --host=$NESSUS_SERVER --port=$NESSUS_PORT"
}
elseif ($agentStatus.StdOut -eq "" -and $agentStatus.StdErr -ne "") {
    Throw "StdErr: $($agentStatus.StdErr)"
}
elseif (-not($agentStatus.StdOut -like "*Running: *")) {
    Throw "StdOut: $($agentStatus.StdOut)"
}

else {
    $stats = Get-NessusStatsFromStdOut -stdOut $agentStatus.StdOut

    If ($stats.linked_to -eq $NESSUS_SERVER -and $stats.link_status -ne 'Not linked to a manager') {
        Write-Output "Connected to $NESSUS_SERVER"
    }
    else {
        Write-Output "Connecting..."
        Start-ProcessGetStreams -FilePath $nessusExe -ArgumentList "agent link --key=$NESSUS_KEY --groups=`"$NESSUS_GROUP`" --host=$NESSUS_SERVER --port=$NESSUS_PORT"
    }

    If ($stats.last_connection_attempt -as [int]) { $stats.last_connection_attempt = Get-DateFromEpochSeconds $stats.last_connection_attempt }
    If ($stats.last_connect -as [int]) { $stats.last_connect = Get-DateFromEpochSeconds $stats.last_connect }
    If ($stats.last_scanned -as [int]) { $stats.last_scanned = Get-DateFromEpochSeconds $stats.last_scanned }
}

 
#$stats | Out-Host

This script streamlines the process of installing and linking the Nessus Agent to the specified Nessus server, automating various steps and ensuring the seamless deployment and integration of the agent within the intended environment.
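The stdout parsing and epoch-to-date helpers are the least obvious parts of the script. Here is the same logic sketched in Python against a hypothetical `nessuscli agent status` output (the Python version keeps UTC, while the PowerShell version converts to local time):

```python
from datetime import datetime, timezone
import re

def stats_from_stdout(std_out: str) -> dict:
    """Turn 'Key: Value' lines into a dict with lowercased, sanitized keys,
    mirroring Get-NessusStatsFromStdOut."""
    stats = {}
    for line in std_out.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            stats[re.sub(r"[^A-Za-z0-9]", "_", key.strip()).lower()] = value.strip()
    return stats

def date_from_epoch_seconds(seconds: int) -> str:
    """Mirror Get-DateFromEpochSecond, but in UTC."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

sample = "Linked to: nessus.example.com\nLast connect: 1698940800"  # hypothetical output
stats = stats_from_stdout(sample)
print(stats["linked_to"])                                   # → nessus.example.com
print(date_from_epoch_seconds(int(stats["last_connect"])))  # → 2023-11-02 16:00:00
```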

Now let’s look at the Linux script that `nessus.ps1` calls:

#!/bin/bash

# Redirect all output to a log file, restoring the original streams on exit
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>/tmp/nessus-install-log.out 2>&1

PACKAGE_NAME="nessusagent"
ACTIVATION_CODE="<Your Nessus Activation Key/Code>"
NESSUS_HOST="<fqdn of your Nessus Manager>"
NESSUS_AGENT="/opt/nessus_agent/sbin/nessuscli"
NESSUS_PORT="<port # if different from 8834>"
NESSUS_GROUP="<name of your group>"
base_url="<url to your Storage Account>"
debian_filename="NessusAgent-10.3.1-ubuntu1404_amd64.deb" # Debian/Ubuntu filename
redhat_7_filename="NessusAgent-10.3.1-es7.x86_64.rpm"     # Red Hat EL7 filename
redhat_8_filename="NessusAgent-10.3.1-es8.x86_64.rpm"     # Red Hat EL8 filename

if_register_agent() {
  if "$NESSUS_AGENT" agent status | grep -q "Linked to: $NESSUS_HOST"; then
    echo "Nessus Agent is already linked to Nessus Manager."
  else
    "$NESSUS_AGENT" agent link --host="$NESSUS_HOST" --port="$NESSUS_PORT" --key="$ACTIVATION_CODE" --groups="$NESSUS_GROUP"
    if [ $? -eq 0 ]; then
      echo "Nessus Agent linked successfully."
    else
      echo "Failed to link Nessus Agent. Check your activation code or permissions."
      exit 1
    fi
  fi
}

 

is_package_installed_debian() {
  if dpkg -l | grep -i "ii  $PACKAGE_NAME"; then
    if_register_agent
    return 0
  else
    return 1
  fi
}

is_package_installed_redhat() {
  if rpm -qa | grep -i "$PACKAGE_NAME" > /dev/null; then
    if_register_agent
    return 0
  else
    return 1
  fi
}

 

install_package_debian() {
  echo "$PACKAGE_NAME is not installed on $ID. Installing it now..." &&
  sleep 20 &&
  wget -qP /tmp "$base_url$debian_filename" &&
  sleep 20 &&
  dpkg -i /tmp/"$debian_filename" &&
  sleep 20 &&
  $NESSUS_AGENT agent link --host="$NESSUS_HOST" --port="$NESSUS_PORT" --key="$ACTIVATION_CODE" --groups="$NESSUS_GROUP" &&
  sleep 20 &&
  systemctl enable nessusagent --now &&
  sleep 20 &&
  $NESSUS_AGENT agent status | tee /tmp/nessus_agent_status &&
  sleep 20 &&
  rm -f /tmp/"$debian_filename"
  exit
}

 

 

install_package_redhat_v7() {
  echo "$PACKAGE_NAME is not installed on $ID-$VERSION_ID. Installing it now..."
  yum -y install wget &&
  sleep 20 &&
  wget -qP /tmp "$base_url$redhat_7_filename" &&
  sleep 20 &&
  rpm -ivh /tmp/"$redhat_7_filename" &&
  sleep 20 &&
  $NESSUS_AGENT agent link --host="$NESSUS_HOST" --port="$NESSUS_PORT" --key="$ACTIVATION_CODE" --groups="$NESSUS_GROUP" &&
  sleep 20 &&
  systemctl enable nessusagent --now &&
  sleep 20 &&
  $NESSUS_AGENT agent status | tee /tmp/nessus_agent_status &&
  rm -f /tmp/"$redhat_7_filename"
  exit
}

install_package_redhat_v8() {
  echo "$PACKAGE_NAME is not installed on $ID-$VERSION_ID. Installing it now..."
  sleep 20 &&
  wget -qP /tmp "$base_url$redhat_8_filename" &&
  sleep 20 &&
  rpm -ivh /tmp/"$redhat_8_filename" &&
  sleep 20 &&
  $NESSUS_AGENT agent link --host="$NESSUS_HOST" --port="$NESSUS_PORT" --key="$ACTIVATION_CODE" --groups="$NESSUS_GROUP" &&
  sleep 20 &&
  systemctl enable nessusagent --now &&
  sleep 20 &&
  $NESSUS_AGENT agent status | tee /tmp/nessus_agent_status &&
  rm -f /tmp/"$redhat_8_filename"
  exit
}

 

check_debian_based() {
  lowercase_id=$(echo "$ID" | tr '[:upper:]' '[:lower:]')
  if [[ "$lowercase_id" == *debian* || "$lowercase_id" == *ubuntu* ]]; then
    if is_package_installed_debian; then
      echo "$PACKAGE_NAME is already installed on $ID."
      exit 0
    else
      install_package_debian
    fi
  fi
}

check_redhat_based() {
  lowercase_id=$(echo "$ID" | tr '[:upper:]' '[:lower:]')
  if [[ "$lowercase_id" == *centos* || "$lowercase_id" == *rhel* || "$lowercase_id" == *ol* || "$lowercase_id" == *el* ]]; then
    if is_package_installed_redhat; then
      echo "$PACKAGE_NAME is already installed on $ID."
      exit 0
    else
      if [[ "$VERSION_ID" == 7 ]]; then
        echo "Red Hat $ID version 7 detected."
        install_package_redhat_v7
      elif [[ "$VERSION_ID" == 8 ]]; then
        echo "Red Hat $ID version 8 detected."
        install_package_redhat_v8
      else
        echo "Unsupported version: $VERSION_ID"
        exit 1
      fi
    fi
  fi
}

 

if [ -f /etc/os-release ]; then

  . /etc/os-release

  check_debian_based

  check_redhat_based

else

  echo "Unsupported Linux distribution."

  exit 1

fi

This script does essentially the same job as the one above, but for Linux distributions. It determines the distribution and version from /etc/os-release, installs the appropriate agent package, and registers the agent to the appropriate Nessus Manager.
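The distribution check at the heart of the script can be sketched in isolation. This minimal example fakes /etc/os-release with a temporary file (the sample ID and VERSION_ID values are placeholders) so it can be run safely on any system:

```shell
# Standalone sketch of the os-release detection pattern used by the script.
# A temporary file stands in for /etc/os-release so this runs anywhere.
os_release=$(mktemp)
cat > "$os_release" <<'EOF'
ID=ubuntu
VERSION_ID="20.04"
EOF
. "$os_release"

# Same normalization and glob matching as check_debian_based/check_redhat_based
lowercase_id=$(echo "$ID" | tr '[:upper:]' '[:lower:]')
if [[ "$lowercase_id" == *debian* || "$lowercase_id" == *ubuntu* ]]; then
  family=debian
elif [[ "$lowercase_id" == *centos* || "$lowercase_id" == *rhel* ]]; then
  family=redhat
else
  family=unsupported
fi

echo "Detected family: $family (version $VERSION_ID)"
rm -f "$os_release"
```

With the sample file above, this prints `Detected family: debian (version 20.04)`; in the real script the same branch decides which install function to call.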

Example of the variables.tf

variable "default_tags" {
  description = "A map of tags to add to all resources"
  type        = map(string)
  default = {
  }
}

variable "tenant_id" {
  description = "Azure AD Tenant ID of the Azure subscription"
  type        = string
}

variable "nessus_schedule" {
  description = "Name of the Schedule in Automation Account"
  type        = string
  default     = "nessus-automation-schedule"
}

variable "timezone" {
  description = "Name of the Timezone"
  type        = string
  default     = "America/New_York"
}

variable "schedule_description" {
  description = "Schedule Description"
  type        = string
  default     = "This is the schedule to download and install the Nessus Agent"
}

variable "week_days" {
  description = "Days of the week to run the schedule"
  type        = list(string)
  default     = ["Monday", "Wednesday", "Saturday"]
}

variable "scritpname_linux" {
  default     = "nessus-linux.sh"
  description = "Name of Linux script"
  type        = string
}

variable "scritpname_win" {
  default     = "nessus-windows.ps1"
  description = "Name of Windows script"
  type        = string
}

variable "sa_container" {
  description = "Name of the Storage Account Container"
  type        = string
}

variable "sa_rg" {
  description = "Name of the Storage Account Resource Group"
  type        = string
}

variable "sa_sub" {
  description = "Subscription ID where the Storage Account lives"
  type        = string
}


variable "sa_acct" {
  description = "Name of the Storage Account"
  type        = string
}

locals {
  vms_file_content = split("\n", file("${path.module}/vms.txt"))
}

variable "schedule_frequency" {
  description = "Job frequency"
  type        = string
  default     = "Week"
}

variable "runbook_name" {
  description = "Name of the runbook"
  type        = string
  default     = "nessus_agent_install"
}

variable "runbook_type" {
  description = "Name of the language used"
  type        = string
  default     = "PowerShell"
}

variable "runbook_description" {
  description = "Description of the Runbook"
  type        = string
  default     = "This runbook will Download and Install the Nessus Agent"
}

variable "start_time" {
  description = "When to start the runbook schedule"
  type        = string
  default     = "2024-10-07T06:00:15+02:00"
}

variable "expiry_time" {
  description = "When the runbook schedule expires"
  type        = string
  default     = "2027-10-07T06:00:15+02:00"
}

variable "identity_sub" {
  description = "Subscription where MI lives"
  type        = string
}
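For reference, a hypothetical terraform.tfvars supplying the variables that have no defaults might look like the following. Every ID and name here is a placeholder, not a value from the repository:

```hcl
# Hypothetical values — replace with your environment's real IDs and names.
tenant_id    = "00000000-0000-0000-0000-000000000000"
sa_container = "scripts"
sa_rg        = "rg-automation"
sa_sub       = "11111111-1111-1111-1111-111111111111"
sa_acct      = "stautomation01"
identity_sub = "22222222-2222-2222-2222-222222222222"

default_tags = {
  environment = "prod"
  owner       = "security-team"
}
```

The remaining variables (schedule name, timezone, script names, and so on) fall back to the defaults declared above unless overridden here.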


All the above code can be found at the link below.

https://github.com/rdeberry-sms/nessus_aa_runboook
]]>
https://www.sms.com/blog/use-azure-automation-runbook-to-deploy-nessus-agent-via-terraform/feed/ 0
Automating Operating System Hardening https://www.sms.com/blog/automating-operating-system-hardening/ https://www.sms.com/blog/automating-operating-system-hardening/#respond Wed, 12 Jul 2023 17:01:42 +0000 https://smsprod01.wpengine.com/?p=6583 By Andrew Stanley, Director of Engineering, SMS

In the ever-evolving landscape of cybersecurity, the importance of operating system hardening cannot be overstated. As the foundational layer of any IT infrastructure, the operating system presents a broad surface area for potential attacks. Hardening these systems, therefore, is a critical step in any comprehensive cybersecurity strategy. However, the challenge lies in automating this process, particularly in legacy on-premises infrastructures not designed with automation in mind. 

Open-source software has emerged as a powerful ally in this endeavor, offering flexibility, transparency, and a collaborative approach to tackling cybersecurity challenges. Tools such as OpenSCAP and Ansible have been instrumental in automating and streamlining the process of operating system hardening. The Center for Internet Security (CIS), a non-profit entity, plays a pivotal role in this context by providing well-defined, community-driven security benchmarks that these tools can leverage. 

While cloud-native architectures have been at the forefront of automation with tools like HashiCorp’s Packer and Terraform, these tools are not confined to the cloud. They can be ingeniously adapted to work with on-premises systems like VMware, enabling the creation of hardened virtual machine images and templates. This convergence of cloud-native tools with traditional on-premises systems is paving the way for a new era in cybersecurity, where robust, automated defenses are within reach for all types of IT infrastructures. This blog post will delve into how these tools can automate operating system hardening, making cybersecurity more accessible and manageable. 

Why Use OpenSCAP and Ansible for Operating System Hardening

The Center for Internet Security (CIS) Benchmarks Level II Server Hardening standard is a stringent set of rules designed for high-security environments. It includes advanced security controls like disabling unnecessary services, enforcing password complexity rules, setting strict access controls, and implementing advanced auditing policies. OpenSCAP, an open-source tool, can automate the application of these benchmarks by generating Ansible templates. This automation ensures consistency, accuracy, and efficiency in securing your servers according to these high-level standards.

Prerequisites

  • VMware vSphere environment for building and testing images
  • One Linux host or VM to run the required tools
  • One Linux host or VM for auditing

Note

The examples in this post use Ubuntu 20.04 but should work for other versions and distros.

Steps

  • Execute the following on the host you intend to use for running OpenSCAP, Ansible, Packer, and Terraform.
# Reference - https://medium.com/rahasak/automate-stig-compliance-server-hardening-with-openscap-and-ansible-85f2f091b00
# install openscap libraries on local and remote hosts
sudo apt install libopenscap8

# Create a working directory
mkdir ~/openscap
export WORKDIR=~/openscap
cd $WORKDIR

# Download ssg packages and unzip
# Check for updates here - https://github.com/ComplianceAsCode/content/releases
wget https://github.com/ComplianceAsCode/content/releases/download/v0.1.67/scap-security-guide-0.1.67.zip
unzip -q scap-security-guide-0.1.67.zip

# Clone openscap
git clone https://github.com/OpenSCAP/openscap.git
  • Create a new Ubuntu 20.04 base image and virtual machine template in VMware

Note

There are several ways to create base images in vSphere. Our recommendation is to use HashiCorp Packer and the packer-examples-for-vsphere project. The setup and configuration of these are outside the scope of this post, but we may cover them in more detail in the future. The advantage of using this project is that it already provides a convenient way to add Ansible playbooks to your image provisioning process. Additionally, SMS develops reusable Terraform modules that are designed to work with images created from this project.

  • Run a remote scan against the new virtual machine you created
# Return to the root of the working directory
cd $WORKDIR

# Scan the newly created Ubuntu 20.04 instance using the CIS Level2 Server profile
./openscap/utils/oscap-ssh --sudo <user@host> 22 xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis_level2_server \
  --results-arf ubuntu2004-cis_level2_server.xml \
  --report ubuntu2004-cis_level2_server.html \
  scap-security-guide-0.1.67/ssg-ubuntu2004-ds.xml
  • Generate an Ansible Remediation Playbook
# Generate an Ansible Playbook using OpenSCAP
oscap xccdf generate fix \
  --fetch-remote-resources \
  --fix-type ansible \
  --result-id "" \
  ubuntu2004-cis_level2_server.xml > ubuntu2004-playbook-cis_level2_server.yml
  • Test the generated Ansible Playbook
# Validate the playbook against the target machine
ansible-playbook -i "<host>," -u <user> -b -K ubuntu2004-playbook-cis_level2_server.yml

Note

It may be necessary to perform the previous scanning and playbook creation steps multiple times. As new packages are added, additional hardening configurations will be needed.

Using Ansible Templates with Packer Examples for VMware vSphere

In this section, we delve into the practical application of Packer in a vSphere environment. We will explore the Packer Examples for VMware vSphere repository on GitHub, which provides a comprehensive set of examples for using Packer with vSphere. These examples demonstrate how to automate the creation of vSphere VM templates using Packer, Ansible and Terraform which can be used to create consistent and repeatable infrastructure. By the end of this section, you will have a solid understanding of how to leverage these examples in a vSphere environment to streamline your infrastructure management tasks. 

# Return to the root of the working directory
cd $WORKDIR

# Clone packer-examples-for-vsphere
git clone https://github.com/vmware-samples/packer-examples-for-vsphere.git
cd ./packer-examples-for-vsphere

# Create a new branch to save customizations. New templates will include the branch name by default.
git checkout -b dev
  • Update the repo to include the Ansible Playbook created with OpenSCAP
# Add a new role to the Ansible section of the repo
mkdir -p ./ansible/roles/harden/tasks
mkdir -p ./ansible/roles/harden/vars

# Create a variables file for the new role and copy all of the variables from the Ansible Playbook
vi ./ansible/roles/harden/vars/main.yml

# Create a task file and copy the remaining contents of the Ansible Playbook
vi ./ansible/roles/harden/tasks/main.yml

# Update the existing Ansible Playbook to include the newly created role
vi ./ansible/main.yml

---
- become: "yes"
  become_method: sudo
  debugger: never
  gather_facts: "yes"
  hosts: all
  roles:
    - base
    - users
    - configure
    - harden
    - clean
  • Create a new hardened image and virtual machine template in VMware
# Follow the setup instructions in the README.md then create your base images
./build.sh

    ____             __                ____        _ __    __     
   / __ \____ ______/ /_____  _____   / __ )__  __(_) /___/ /____ 
  / /_/ / __  / ___/ //_/ _ \/ ___/  / __  / / / / / / __  / ___/ 
 / ____/ /_/ / /__/ ,< /  __/ /     / /_/ / /_/ / / / /_/ (__  )  
/_/    \__,_/\___/_/|_|\___/_/     /_____/\__,_/_/_/\__,_/____/   

  Select a HashiCorp Packer build for VMware vSphere:

      Linux Distribution:

         1  -  VMware Photon OS 4
         2  -  Debian 11
         3  -  Ubuntu Server 22.04 LTS (cloud-init)
         4  -  Ubuntu Server 20.04 LTS (cloud-init)

Choose Option 4

Creating Virtual Machines on VMware vSphere Using the Hardened Virtual Machine Templates

In this section, we will explore using the ‘terraform-vsphere-instance’ project, hosted on GitLab by SMS, for creating virtual machines. This project provides a set of Terraform configurations designed to create instances on VMware vSphere. These configurations leverage the power of Terraform, a popular Infrastructure as Code (IaC) tool, to automate the provisioning and management of vSphere instances. By using these Terraform modules, you can streamline the process of creating and managing your virtual machines on vSphere, ensuring consistency and repeatability in your infrastructure.

  • Create a virtual machine instance from the new template
# Return to the root of the working directory
cd $WORKDIR

# Clone terraform-vsphere-instance
git clone https://gitlab.com/sms-pub/terraform-vsphere-instance.git
cd ./terraform-vsphere-instance/examples/vsphere-virtual-machine/template-linux-cloud-init

# Copy and update the example tfvars file with settings for your environment
cp terraform.tfvars.example test.auto.tfvars

# Deploy a new virtual machine using Terraform
terraform plan

...
Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + module_output = [
      + {
          + vm_id           = (known after apply)
          + vm_ip_address   = (known after apply)
          + vm_ip_addresses = (known after apply)
          + vm_moid         = (known after apply)
          + vm_tools_status = (known after apply)
          + vm_vmx_path     = (known after apply)
        },
    ]

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

terraform apply

...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

module_output = [
  {
    "vm_id" = "423d5014-829b-e000-9489-ac12dfaf4627"
    "vm_ip_address" = "10.4.3.142"
    "vm_ip_addresses" = tolist([
      "10.4.3.142",
      "fe80::250:56ff:febd:394f",
    ])
    "vm_moid" = "vm-4174"
    "vm_tools_status" = "guestToolsRunning"
    "vm_vmx_path" = "f784ad64-86a2-588d-a073-0025b500002e/lin-test-2004-default-00.vmx"
  },
]

Conclusion

In this blog post, we’ve explored the importance of operating system hardening and the challenges of automating this process, particularly in legacy on-premises infrastructures. We’ve seen how open-source tools like OpenSCAP and Ansible, along with the CIS Benchmarks, provide a robust framework for maintaining the security of enterprise systems. 

We’ve also delved into the practical application of Packer in a vSphere environment, demonstrating how to automate the creation of vSphere VM templates. Furthermore, we’ve seen how these templates can be used to create consistent and repeatable infrastructure, ensuring a high level of security across all systems. 

Finally, we’ve explored the use of Terraform modules from GitLab for creating virtual machines on VMware vSphere. This approach leverages the power of Infrastructure as Code (IaC) to automate the provisioning and management of vSphere instances, streamlining the process and ensuring consistency and repeatability in your infrastructure. 

In conclusion, the convergence of cloud-native tools with traditional on-premises systems is paving the way for a new era in cybersecurity. By leveraging these tools, organizations can ensure that their systems are configured according to best security practices and are resilient against potential threats. This approach makes cybersecurity more accessible and manageable, even in complex, legacy infrastructures. 

As we move forward, it’s clear that the automation of operating system hardening will continue to play a crucial role in cybersecurity. By staying informed and leveraging the right tools, we can ensure that our systems remain secure in the face of ever-evolving threats.

]]>
https://www.sms.com/blog/automating-operating-system-hardening/feed/ 0
Ordering Fraud: The Other Side of the Supply Chain https://www.sms.com/blog/ordering-fraud-side-supply-chain/ Fri, 15 May 2020 13:44:54 +0000 http://sms-old.local/?p=2456 By Ben Friedman, Vice President, Strategic Sourcing, SMS

Overview
Discussions of supply chain management are everywhere these days. Our interconnected world increasingly relies on the outputs of a multitude of companies located all over the world. Nothing makes this clearer than the shortages of medical supplies, basic commodities, and even food due to the COVID-19 pandemic. We have all seen first-hand how complex supply chains can both provide value and bring substantial risk, with sometimes cheaper products but a higher degree of supply chain disruption in a time of crisis. Lost in these discussions is the other side of the supply chain: the customer, the organization, or individual buying your product. This is particularly important in the IT industry.

Selling to the Wrong Customer
Monitoring the sell-side of your transactions is every bit as important as monitoring the buy-side. Increasingly sophisticated fraud in the IT space has now begun to impact public institutions including hospitals, universities, and even government agencies. Criminals pose as legitimate buyers (sometimes using actual employee names easily found on LinkedIn) from large institutions and request pricing for high-demand IT components. Once you have engaged with a fraudulent buyer, you may actually be directed to the accounting department of the actual organization (if a commercial client) to establish credit terms, or you may be given a stolen credit card as payment. The scam has to happen quickly for the criminal to succeed, so they almost always want to buy off-the-shelf products. This fraud can have the following impacts on your company:

  • Theft. Much of this fraud results in outright theft, and you, the seller, are left with an unpaid invoice or reversed charges on a fraudulent credit card.
  • Undermining of traditional markets and encouragement of the Gray and Black Markets. Every fraudulent sale means there is more stolen material in the marketplace, negatively impacting the sale of legitimate products.
  • Damage to reputation. Any investigation of an incident will ultimately involve your suppliers and the customer the criminal impersonated. This is not the kind of press your company needs.
  • Financial support of criminal or antisocial enterprises. Groups participating in this type of fraud are more sophisticated and organized than you might think, and can even be tied to organized crime and terrorist organizations.

What Should You Look for to Prevent Selling to Fraudulent Customers?

  • Requests for commercially resalable items (for example: iPhones, hard drives, inexpensive routers, and toner cartridges)
  • Requests from a webmail account whose address does not match the company name
  • Requests from companies or individuals with no past business history with your organization that imply a sense of urgency
  • Requests that contain no quantities
  • Requests from customers that do not know your name
  • Requests that immediately ask for “net terms”
  • Requests inquiring if you would ship overseas
  • Requests from a company that already resells the items they are asking to buy. As odd as this seems, it does happen.
  • Requests where there is no contact phone number
  • Post order changes to terms, credit card information, or shipping location
  • Your accounting department receives requests for credit checks from companies with no known business relationship. In this case, someone may be trying to impersonate your company.

Example of a Fraudulent Request
The case below is a real example of an attempt to defraud. At first glance the email and attachment seem legitimate, but upon closer inspection it becomes clear this is fraud.

[Redacted example of the fraudulent email]
[Redacted example of the fraudulent RFQ attachment]

Questions You Should Be Asking About This Request

  • Do you know this customer? How do they know you?
  • What is the domain of the email address? I have redacted the email address, but if you are suspicious of the domain you should check your organization’s security policy on handling suspected phishing attempts.
  • The request is for items that have consumer value and high resale value. Does that seem typical?
  • The request is asking for “unlocked phones”. Would the Government ever request this? Maybe, but it is a warning sign nevertheless.
  • Would the Chief Acquisition Officer really be putting out an RFQ for 55 cell phones? Very unlikely. A request like this would come from an acquisition specialist or a contracting officer.
  • Does the RFQ look like a regular government RFQ? Is this how you normally receive RFQs from this customer?
  • Does the logo look right? If you look closely you can see that it was clearly cut and pasted from the agency website and perhaps resized.

SMS and Supply Chain Management
SMS takes supply chain management seriously and recently became one of fewer than three dozen companies worldwide certified under the ISO/IEC 20243:2018 Open Trusted Technology Provider Standard (O-TTPS). This standard is a set of industry best practices designed to reduce the risk of acquiring tainted or counterfeit IT products. SMS took the additional step of implementing training, procedures, and systems designed to prevent fraudulent customer sales, to do our part in ensuring that tainted products do not originate from SMS. Ultimately, securing the supply chain on both the buy-side and the sell-side is critical in reducing the amount of overall IT fraud.

]]>