
FortiSIEM Installing a Collector on Bare Metal Hardware

Installing a Collector on Bare Metal Hardware

You can install Collectors on bare metal hardware (that is, without a hypervisor layer). Be sure to read the section on Hardware Requirements for Collectors in Browser Support and Hardware Requirements before starting the installation process.

  1. Download the Linux collector ISO image from https://images.FortiSIEM.net/VMs/releases/CO/.
  2. Burn the ISO to a DVD so that you can boot from it to begin the setup.
  3. Before you begin the installation, make sure the host where you want to install the Collector has an Internet connection.
  4. Log into the server where you want to install the Collector as root and make sure your boot DVD is loaded.
  5. Go to /etc/yum.repos.d and make sure these configuration files are in the directory:

CentOS-Base.repo

CentOS-Debuginfo.repo

CentOS-Media.repo

CentOS-Vault.repo

  6. The system will reboot itself when installation completes.
  7. Follow the instructions in Registering the Collector to the Supervisor to complete the Collector setup.

FortiSIEM General Installation


General Installation

Configuring Worker Settings

If you are using a FortiSIEM clustered deployment that includes both Workers and Collectors, you must define the Worker Address for your Worker nodes before you register any Collectors. When you register your Collectors, the Worker information is retrieved and saved locally to the Collector. The Collector then uploads event and configuration change information to the Worker.

Worker Address in a Non-Clustered Environment

If you are not using a FortiSIEM clustered deployment, you will not have any Worker nodes. In that case, enter the IP address of the Supervisor for the Worker Address, and your Collectors will upload their information directly to the Supervisor.

  1. Log in to your Supervisor node.
  2. Go to Admin > General Settings > System.
  3. For Worker Address, enter a comma-separated list of IP addresses or host names for the Workers.

The Collector will attempt to upload information to the listed Workers, starting with the first Worker address and proceeding until it finds an available Worker.

 

Registering the Supervisor
  1. In a Web browser, navigate to the Supervisor’s IP address: https://<Supervisor IP>.
  2. Enter the login credentials associated with your FortiSIEM license, and then click Register.
  3. When the System is ready message appears, click the Here link to log in to FortiSIEM.
  4. Enter the default login credentials.
User ID: admin
Password: admin*1
Cust/Org ID: super
  5. Go to Admin > Cloud Health and check that the Supervisor Health is Normal.
Registering the Worker
  1. Go to Admin > License Management > VA Information.
  2. Click Add, enter the new Worker’s IP address, and then click OK.
  3. When the new Worker is successfully added, click OK.

You will see the new Worker in the list of Virtual Appliances.

  4. Go to Admin > Cloud Health and check that the Worker Health is Normal.
Registering the Collector to the Supervisor

The process for registering a Collector node with your Supervisor node depends on whether you are setting up the Collector as part of an enterprise or multi-tenant deployment. For a multi-tenant deployment, you must first create an organization and add Collectors to it before you register them with the Supervisor. For an enterprise deployment, you install the Collector within your IT infrastructure and then register it with the Supervisor.

Create an Organization and Associate Collectors with it for Multi-Tenant Deployments

Register the Collector with the Supervisor for Enterprise Deployments

Create an Organization and Associate Collectors with it for Multi-Tenant Deployments
  1. Log in to the Supervisor.
  2. Go to Admin > Setup Wizard > Organizations.
  3. Click Add.
  4. Enter Organization Name, Admin User, Admin Password, and Admin Email.
  5. Under Collectors, click New.
  6. Enter the Collector Name, Guaranteed EPS, Start Time, and End Time.
  7. Click Save.

The newly added organization and Collector should be listed on the Organizations tab.

  8. In a Web browser, navigate to https://<Collector-IP>:5480.
  9. Enter the Collector setup information.
Name: Collector Name
User ID: Organization Admin User
Password: Organization Admin Password
Cust/Org ID: Organization Name
Cloud URL: Supervisor URL

 

  10. Click to complete the registration.

The Collector will restart automatically after registration succeeds.

  11. In the Supervisor interface, go to Admin > Collector Health and check that the Collector Health is Normal.
Register the Collector with the Supervisor for Enterprise Deployments
  1. Log in to the Supervisor.
  2. Go to Admin > License Management and check that Collectors are allowed by the license.
  3. Go to Setup Wizard > General Settings and add at least the Supervisor’s IP address.

This should contain a list of the Supervisor and Worker accessible IP addresses or FQDNs.

  4. Go to Setup Wizard > Event Collector and add the Collector information.
Name: The Collector name; it will be used in step 6.
Guaranteed EPS: The number of Events per Second (EPS) that this Collector will be provisioned for.
Start Time: Select Unlimited.
End Time: Select Unlimited.
  5. Connect to the Collector at https://<IP Address of the Collector>:5480.
  6. Enter the Name from step 4.
  7. The User ID and Password are the same as the admin user ID/password for the Supervisor.
  8. The IP address is the IP address of the Supervisor.
  9. For Organization, enter Super.
  10. The Collector will reboot during the registration, and you will be able to see its status on the Collector Health page.

Using NFS Storage with AccelOps


Using NFS Storage with AccelOps

When you install FortiSIEM, you have the option to use either local storage or NFS storage. For cluster deployments using Workers, the use of an NFS Server is required for the Supervisor and Workers to communicate with each other. These topics describe how to set up and configure NFS servers for use with FortiSIEM.

Configuring NFS Storage for VMware ESX Server

This topic describes the steps for installing an NFS server on CentOS Linux 6.x and higher for use with VMware ESX Server. If you are using an operating system other than CentOS Linux, follow your typical procedure for NFS server set up and configuration.

  1. Log in to CentOS 6.x as root.
  2. Create a new directory in the large volume to share with the FortiSIEM Supervisor and Worker nodes, and change the access permissions to provide FortiSIEM with access to the directory.
  3. Check the shared directories (a command sketch of these steps follows below).
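The following commands are a minimal sketch of steps 1-3, assuming /data is the shared directory and 192.168.1.0/24 is the subnet of the FortiSIEM nodes (both are placeholders); adjust the path, network, and permissions to your environment.

# Create the shared directory and give the FortiSIEM nodes access to it
mkdir /data
chmod 777 /data
# Export the directory to the FortiSIEM subnet (placeholder network)
echo "/data 192.168.1.0/24(rw,no_root_squash)" >> /etc/exports
exportfs -ar
# Check the shared directories
exportfs -v
showmount -e localhost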

Related Links

Setting Up NFS Storage in AWS

 

Using NFS Storage with Amazon Web Services

Setting Up NFS Storage in AWS

Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS

Setting Up NFS Storage in AWS

Youtube Talk on NFS Architecture for AWS

Several architecture and partner options for setting up NFS storage that is highly available across availability zone failures are presented by an AWS Solutions Architect in this talk (40 minutes), with a link to the slides.

Using EBS Volumes

These instructions cover setting up EBS volumes for NFS storage. EBS volumes have a durability guarantee that is 10 times higher than traditional disk drives. This is because EBS data is replicated within an availability zone for component failures (RAID equivalent), so adding another layer of RAID does not provide higher durability guarantees. EBS has an annual failure rate (AFR) of 0.1 to 0.5%. In order to have higher durability guarantees, it is necessary to take periodic snapshots of the volumes. Snapshots are stored in AWS S3, which has 99.999999999% durability (via synchronous replication of data across multiple data centers) and 99.99% availability. See the topic Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS for more information.

Using EC2 Reserved Instances for Production

If you are running these machines in production, it is significantly cheaper to use EC2 Reserved Instances (1 or 3 year) as opposed to on-demand instances.

  1. Log in to your AWS account and navigate to the EC2 dashboard.
  2. Click Launch Instance.
  3. Review these configuration options:
Network and Subnet: Select the VPC you set up for your instance.
Public IP: Clear the option Automatically assign a public IP address to your instances if you want to use VPN.
Placement Group: A placement group is a logical grouping for your cluster instances. Placement groups have low-latency, full-bisection 10 Gbps bandwidth between instances. Select an existing group or create a new one.
Shutdown Behavior: Make sure Stop is selected.
Enable Termination Protection: Make sure Protect Against Accidental Termination is selected.
EBS-Optimized Instance: An EBS-optimized instance enables dedicated throughput between Amazon EBS and Amazon EC2, providing improved performance for your EBS volumes. Note that if you select this option, additional Amazon charges may apply.
  4. Click Next: Add Storage.
  5. Add EBS volumes up to the capacity you need for EventDB storage.

EventDB Storage Calculation Example

At 5000 EPS, daily storage requirements amount to roughly 22-30 GB (300k events average 15-20 MB in compressed format stored in EventDB). In order to have 6 months of data available for querying, you need 4-6 TB of storage. On AWS, the maximum EBS volume size is 1 TB. In order to have larger disks, you need to create software RAID-0 volumes. You can attach at most 8 volumes to an instance, which results in 8 TB with RAID-0. There is no advantage in using a RAID configuration other than RAID-0, because it does not increase durability guarantees. To ensure much better durability guarantees, plan on performing regular snapshots, which store the data in S3 as described in Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS. Since RAID-0 stripes data across these volumes, the aggregate IOPS you get will be the sum of the IOPS of the individual volumes.

  6. Click Next: Tag Instance.
  7. Under Value, enter the Name you want to assign to all the instances you will launch, and then click Create Tag.

After you complete the launch process, you will have to rename each instance to correspond to its role in your configuration, such as Supervisor, Worker1, Worker2.

  8. Click Next: Configure Security Group.
  9. Select Select an Existing Security Group, and then select the default security group for your VPC.

FortiSIEM needs access to HTTPS over port 443 for GUI and API access, and access to SSH over port 22 for remote management, both of which are set in the default security group. This group will allow traffic between all instances within the VPC.

  10. Click Review and Launch.
  11. Review all your instance configuration information, and then click Launch.
  12. Select an existing Key Pair, or create a new Key Pair, to connect to these instances via SSH.

If you use an existing key pair, make sure you have access to it. If you are creating a new key pair, download the private key and store it in a secure location accessible from the machine from where you usually connect to these AWS instances.

  13. Click Launch Instances.
  14. When the EC2 Dashboard reloads, check that all your instances are up and running.
  15. Select the NFS server instance and click Connect.
  16. Follow the instructions to SSH into the instance as described in Configuring the Supervisor and Worker Nodes in AWS. Then configure the NFS mount point access to give the FortiSIEM internal IP full access.
# Update the OS and libraries with the latest patches
$ sudo yum update -y
$ sudo yum install -y nfs-utils nfs-utils-lib lvm2
$ sudo su -

# echo Y | mdadm --verbose --create /dev/md0 --level=0 --chunk=256 --raid-devices=4 /dev/sdf /dev/sdg /dev/sd
# mdadm --detail --scan > /etc/mdadm.conf
# cat /etc/mdadm.conf
# dd if=/dev/zero of=/dev/md0 bs=512 count=1
# pvcreate /dev/md0
# vgcreate VolGroupData /dev/md0
# lvcreate -l 100%vg -n LogVolDataMd0 VolGroupData
# mkfs.ext4 -j /dev/VolGroupData/LogVolDataMd0
# echo "/dev/VolGroupData/LogVolDataMd0 /data       ext4    defaults        1 1" >> /etc/fstab
# mkdir /data
# mount /data
# df -kh

# vi /etc/exports
/data   10.0.0.0/24(rw,no_root_squash)
# exportfs -ar

# chkconfig --levels 2345 nfs on
# chkconfig --levels 2345 rpcbind on
# service rpcbind start
Starting rpcbind:                                          [  OK  ]
# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Stopping RPC idmapd:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]

Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS

In order to have high durability guarantees for FortiSIEM data, you should periodically create EBS snapshots on an hourly, daily, or weekly basis and store them in S3. The EventDB is typically hosted as a RAID-0 volume of several EBS volumes, as described in Setting Up NFS Storage in AWS. In order to reliably snapshot these EBS volumes together, you can use a script, ec2-consistent-snapshot, to briefly freeze the volumes and create a snapshot. You can then use a second script, ec2-expire-snapshots, to schedule cron jobs to delete old snapshots that are no longer needed. CMDB is hosted on a much smaller EBS volume, and you can use the same scripts to take snapshots of it as well.

You can find details of how to download these scripts and set up periodic snapshots and expiration in this blog post: http://twigmon.blogspot.com/2013/09/installing-ec2-consistent-snapshot.html

You can download the scripts from these GitHub projects:

https://github.com/alestic/ec2-consistent-snapshot https://github.com/alestic/ec2-expire-snapshots
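As a rough illustration only (not taken from the scripts' documentation), a cron configuration like the one below could drive hourly snapshots of the RAID-0 member volumes and daily pruning of old snapshots. The volume IDs, region, schedule, and especially the option names are assumptions; confirm the exact flags against each script's --help output before use.

# Hypothetical /etc/cron.d entry; volume IDs and region are placeholders
# Hourly: briefly freeze /data and snapshot the EBS volumes backing the RAID-0 set
0 * * * * root ec2-consistent-snapshot --region us-west-2 --description "EventDB hourly" --freeze-filesystem /data vol-11111111 vol-22222222 vol-33333333 vol-44444444
# Daily: expire old snapshots (option names are assumptions; see ec2-expire-snapshots --help)
30 0 * * * root ec2-expire-snapshots --region us-west-2 --keep-most-recent 24 vol-11111111 vol-22222222 vol-33333333 vol-44444444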

FortiSIEM Moving CMDB to a separate Database Host


Moving CMDB to a separate Database Host

It is desirable to move the CMDB (Postgres) database to a separate host for the following reasons:

  1. In larger deployments, it reduces the database server load on the Supervisor node, leaving more resources for the application server and other back-end modules.
  2. Whenever high availability for CMDB data is desired, it is easier and cleaner to set up separate hosts with Postgres replication, managed separately, than to do this with the embedded Postgres on the Supervisor. This is especially true in an AWS environment, where the AWS PostgreSQL Relational Database Service (RDS) takes just a few clicks to set up a DB instance that replicates across availability zones and fails over automatically.
Freshly Installed Supervisor

 

Install separate PostgreSQL DB servers or an AWS RDS instance in Multi-AZ mode. Use PostgreSQL version 9.1 or greater. The remaining steps use the RDS example. For instance, say the hostname of the RDS instance in the us-west-2 region is phoenixdb.XXXXXX.us-west-2.rds.amazonaws.com on port 5432, with username ‘phoenix’, DB name ‘phoenixdb’, and password ‘YYYYYYYY’. You will need to allow the Supervisor and Worker nodes to connect to port 5432 on the RDS service; change the security groups to allow this.

  1. Make sure the above RDS host is reachable from the FortiSIEM Supervisor.
  2. Install the FortiSIEM Supervisor node and configure it as usual, including adding a license.
  3. Stop all running services so that the CMDB is not modified further.
  4. Dump the CMDB data in the local Postgres DB into a local file.
  5. Import the schema/data into the external Postgres (a command sketch for steps 4 and 5 follows after this list).
  6. Change phoenix_config.txt to add the DB_SERVER_* info.
  7. Change the Glassfish application server’s domain.xml to point to the external CMDB server.
  8. Change phoenix_config.txt to remove the check for the postgres process.
  9. Disable postgres from starting up.
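The following is a minimal sketch of steps 4 and 5, using the example RDS endpoint above. The local database name and user shown here are assumptions and may differ on your FortiSIEM build; adjust them before running anything.

# On the Supervisor: dump the local CMDB (schema and data) to a file
# (local DB name/user are assumed to be phoenixdb/phoenix)
pg_dump -U phoenix phoenixdb > /root/phoenixdb.sql
# Import the dump into the external Postgres / RDS instance (enter the RDS password when prompted)
psql -h phoenixdb.XXXXXX.us-west-2.rds.amazonaws.com -p 5432 -U phoenix -d phoenixdb -f /root/phoenixdb.sql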

 

 

Production / Existing Supervisor
  1. Install and have the external Postgres ready as described at the beginning of the previous section.
  2. Take point-in-time snapshots of the Supervisor so you can revert if you hit any issues.
  3. Stop crond on the Supervisor, and wait for phwatchdog to stop.
  4. Stop Apache on the Supervisor and all Workers so that Collectors start buffering events.
  5. Shut down the Worker nodes while you move CMDB out.
  6. Follow the instructions from “Freshly Installed Supervisor” to complete the steps.

FortiSIEM Windows Agent and Agent Manager Install

FortiSIEM can discover and collect performance metrics and logs from Windows Servers in an agentless fashion via WMI. However, agents are needed when you must collect richer data, such as file integrity monitoring, or collect from a large number of servers.

This section describes how to set up FortiSIEM Windows Agent and Agent Manager as part of the FortiSIEM infrastructure.

 

 

FortiSIEM Windows Agent Pre-installation Notes

FortiSIEM Windows Agent Pre-installation Notes

Hardware and Software Requirements

Windows Agents

Windows Agent Manager

Supported versions

Windows Agent

Windows Agent Manager

Communication Ports between Agent and Agent Manager

Licensing

When you purchase the Windows Agent Manager, you also purchase a set number of licenses that can be applied to the Windows devices you are monitoring. After you have set up and configured Windows Agent Manager, you can see the number of both Basic and Advanced licenses that are available and in use in your deployment by logging into your Supervisor node and going to Admin > License Management, where you will see an entry for Basic Windows Licenses Allowed/Used and Advanced Windows Licenses Allowed/Used. You can see how these licenses have been applied by going to Admin > Windows Agent Health. When you are logged into the Windows Agent Manager you can also see the number of available and assigned licenses on the Assign Licenses to Users page.

There are two types of licenses that you can associate with your Windows agent.

None: An agent has been installed on the device, but no license is associated with it. This device will not be monitored until a license is applied to it.
Advanced: The agent is licensed to monitor all activity on the device, including logs, installed software changes, and file/folder changes.
Basic: The agent is licensed to monitor only logs on the device.

When applying licenses to agents, keep in mind that Advanced includes Basic, so if you have purchased a number of Advanced licenses, you could use all those licenses for the Basic purpose of monitoring logs. For example, if you have purchased a total of 10 licenses, five of which are Advanced and five of which are Basic, you could apply all 10 licenses to your devices as Basic.

Feature License Type
Windows Security Logs Basic
Windows Application Logs Basic
Windows System Logs Basic
Windows DNS Logs Basic
Windows DHCP Logs Basic
IIS logs Basic
DFS logs Basic
Any Windows Log File Basic
Custom file monitoring Basic
File Integrity Monitoring Advanced
Installed Software Change Monitoring Advanced
Registry Change Monitoring Advanced
WMI output Monitoring Advanced
PowerShell Output Monitoring Advanced
Hardware and Software Requirements

Windows Agents

CPU: x86 or x64 (or compatible) at 2 GHz or higher
Hard Disk: 10 GB (minimum)
Server OS: Windows XP SP3 and above (recommended)
Desktop OS: Windows 7/8. Performance issues may occur due to limitations of the desktop OS.
RAM: 1 GB for XP; 2+ GB for Windows Vista and above / Windows Server
Installed Software: .NET Framework 4.0 and PowerShell 2.0 or higher. .NET Framework 4.0 can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=17718. You can download PowerShell from Microsoft at http://www.microsoft.com/en-us/download/details.aspx?id=4045.
Windows OS Language: English

Windows Agent Manager

Each Manager has been tested to handle up to 500 agents at an aggregate 7.5K events/sec.

CPU: x86 or x64 (or compatible) at 2 GHz or higher
Hard Disk: 10 GB (minimum)
Server OS: Windows Server 2008 and above (strongly recommended)
Desktop OS: Windows 7/8. Performance issues may occur due to limitations of the desktop OS.
RAM: For a 32-bit OS, 2 GB for Windows 7/8 is a minimum. For a 64-bit OS, 4 GB for Windows 7/8 and Windows Server 2008/2012 is a minimum.
Installed Software: .NET Framework 4.5; SQL Server Express or SQL Server 2012, installed using “SQL Server Authentication Mode”; PowerShell 2.0 or higher; IIS 7 or higher installed. For IIS 7/7.5, the ASP.NET feature must be enabled from the Application Development Role Service of IIS; for IIS 8.0+, the ASP.NET 4.5 feature must be enabled from the Application Development Role Service of IIS. .NET Framework 4.5 can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=30653, and is already available on Windows 8 and Windows Server 2012. You can download PowerShell from Microsoft at http://www.microsoft.com/en-us/download/details.aspx?id=4045. SQL Server Express does not have any performance degradation compared to SQL Server 2012.
Windows OS Language: English
Supported versions

Windows Agent

Windows 7

Windows 8

Windows XP SP3 or above

Windows Server 2003

Windows Server 2008

Windows Server 2008 R2

Windows Server 2012

Windows Server 2012 R2

Windows Agent Manager

Windows Server 2008 R2

Windows Server 2012

Windows Server 2012 R2

Communication Ports between Agent and Agent Manager

TCP Port 443 (V1.1 onwards) and TCP Port 80 (V1.0) on the Agent Manager for receiving events from Agents. Ports 135, 137, 139, and 445 are needed for NetBIOS-based communication.

Installing FortiSIEM Windows Agent Manager
Prerequisites
  1. Make sure that the ports needed for communication between Windows Agent and Agent Manager are open and the two systems can communicate
  2. For versions 1.1 and higher, Agent and Agent Manager communicate via HTTPS. For this reason, there is a special pre-requisite: Get your Common Name / Subject Name from IIS
    1. Log on to the Windows Agent Manager
    2. Open IIS by going to Run, typing inetmgr, and pressing Enter
    3. Go to Default Web Site in the left pane
    4. Right-click Default Web Site and select Edit Bindings.
    5. In the Site Bindings dialog, check whether you have https under the Type column
    6. If https is available, then
      1. Select the row corresponding to https and click Edit
      2. In the Edit Site Binding dialog, under the SSL certificate section, click the .. button
      3. In the Certificate dialog, under the General tab, note the value of Issued to. This is your Common Name / Subject Name
    7. If https is not available, then you need to bind the default web site with https.
      1. Import a new certificate. This can be done in one of two ways.
        1. Either create a self-signed certificate as follows:
          1. Open IIS by going to Run, typing inetmgr, and pressing Enter
          2. In the left pane, select the computer name
          3. In the right pane, double-click on Server Certificates
          4. In the Server Certificates section, click Create Self-Signed Certificate... in the right pane
          5. In the Create Self-Signed Certificate dialog, specify a friendly name for the certificate and click OK
          6. You will see your new certificate in the Server Certificates list
        2. Or, import a third-party certificate from a certification authority:
          1. Buy the certificate (.pfx or .cer file)
          2. Install the certificate file on your server
          3. Import the certificate in IIS
          4. Go to IIS. Select the computer name and, in the right pane, select Server Certificates
          5. If the certificate is a PFX file:
            1. In the Server Certificates section, click .. in the right pane
            2. In the Import Certificate dialog, browse to the pfx file and put it in the Certificate file (.pfx) box
            3. Give your pfx password and click OK. Your certificate gets imported to IIS
          6. If the certificate is a CER file:
            1. In the Server Certificates section, click Complete Certificate Request… in the right pane
            2. In the Complete Certificate Request dialog, browse to the CER file and put it in the File name section
            3. Enter the friendly name and click OK. Your certificate gets imported to IIS
      2. Bind your certificate to the Default Web Site:
        1. Open IIS by going to Run, typing inetmgr, and pressing Enter
        2. Right-click on Default Web Site and select Edit Bindings… In the Site Bindings dialog, click Add…
        3. In the Add Site Binding dialog, select https from the Type drop-down menu
        4. The Host name is optional, but if you enter it, it must be the same as the certificate’s common name / subject name
        5. Select your certificate from the SSL certificate drop-down list
        6. Click to confirm. Your certificate is now bound to the Default Web Site.
  3. Enable TLS 1.2 for Windows Agent Manager 2.0 for operating with FortiSIEM Supervisor/Worker 4.6.3 and above. By default, SSL 3 / TLS 1.0 is enabled in Windows Server 2008 R2, so before proceeding with the server installation, enable TLS 1.2 manually as follows (a sketch of typical registry commands follows after this list).
    1. Start an elevated Command Prompt (i.e., with administrative privileges)
    2. Run the following commands sequentially as shown.
    3. Restart the computer.
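The exact commands are not reproduced above. As a sketch only, and not necessarily the vendor's exact sequence, TLS 1.2 is typically enabled on Windows Server 2008 R2 by adding SCHANNEL registry keys from the elevated prompt, for example:

rem Enable TLS 1.2 for the server side of SCHANNEL
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v Enabled /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v DisabledByDefault /t REG_DWORD /d 0 /f
rem Enable TLS 1.2 for the client side of SCHANNEL
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f

Depending on the build, the .NET-based Agent Manager may also need SchUseStrongCrypto set under HKLM\SOFTWARE\Microsoft\.NETFramework\v4.0.30319; treat that as an assumption and verify it against the release notes.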
Procedures
  1. On the machine where you want to install the manager, launch either the FortiSIEMServer-x86.MSI (for 32-bit Windows) or FortiSIEMServer-x64.MSI (for 64-bit Windows) installer.
  2. In the Welcome dialog, click Next.
  3. In the EULA dialog, agree to the Terms and Conditions, and then click Next.
  4. Specify the destination path for the installation, and then click Next.

By default the Windows Agent Manager will be installed at C:\Program Files\FortiSIEM\Server.

  5. Specify the destination path to install the client agent installation files, and then click Next.

By default these files will be installed at C:\FortiSIEM\Agent. The default location will be on the drive that has the most free storage space. This path will automatically become a shared location that you will access from the agent devices to install the agent software on them.

  6. In the Database Settings dialog,
    1. Select the database instance where metrics and logs from the Windows devices will be stored.
    2. Select whether you want to use Windows authentication, otherwise provide the login credentials that are needed to access the SQL Server instance where the database is located.
    3. Enter the path where FortiSIEM Agent Manager database will be stored. By default it is C:\FortiSIEM\Data
  7. Provide the path to the FortiSIEM Supervisor, Worker, or Collector that will receive information about your Windows devices. Click Next.
  8. In the Administrator Settings dialog, enter the username and password credentials that you will use to log in to the Windows Agent Manager.

Both your username and password should be at least six characters long.

  9. (New in Release 1.1, for HTTPS communication between Agent and Agent Manager) Enter the common name / subject name of the SSL certificate created in prerequisite step 2.

  10. Click Install.
  11. When the installation completes, click Finish.
  12. You can now exit the installation process, or click Close Set Up and Run FortiSIEM to log into your FortiSIEM virtual appliance.

 

 

Installing FortiSIEM Windows Agent
Prerequisites
  1. Windows Agent and Agent Manager need to be able to communicate – agents need to access a path on the Agent Manager machine to install the agent software.
  2. Starting with Version 1.1, there is a special requirement if you want user information appended to file/directory change events. Typically, file/directory change events do not have information about the user who made the change. To get this information, you have to perform the following steps; without them, file monitoring events will not have user information. (A command-line alternative for the Workgroup case is sketched after this list.)
    a. In a Workgroup Environment:
      1. Go to Control Panel
      2. Open Administrative Tools
      3. Double-click on Local Security Policy
      4. Expand Advanced Audit Policy configuration in the left pane
      5. Under Advanced Audit Policy, expand System Audit Policies – Local Group Policy Object
      6. Under System Audit Policies – Local Group Policy Object, select Object Access
      7. Double-click on Audit File System in the right pane
      8. The Audit File System Properties dialog opens. In this dialog, under the Policy tab, select Configure the following audit events, and then select both the Success and Failure check boxes
      9. Click Apply and then OK
    b. In an Active Directory Domain Environment, the FortiSIEM administrator can use Group Policies to propagate the above settings to the agent computers as follows:
      1. Go to Control Panel
      2. Open Administrative Tools
      3. Click on Group Policy Management
      4. In the Group Policy Management dialog, expand Forest:<domain_name> in the left pane
      5. Under Forest:<domain_name>, expand Domains
      6. Under Domains, expand <domain_name>
      7. Right-click on <domain_name> and click Create a GPO in this domain, and link it here…
      8. The New GPO dialog appears. Enter a new name (e.g., MyGPO) in the Name text box and confirm.
      9. MyGPO appears under the expanded <domain_name> in the left pane. Click on MyGPO and click on the Scope tab in the right pane.
      10. Under the Scope tab, click Add in the Security Filtering section
      11. The Select User, Computer or Group dialog opens. In this dialog, click the Object Types button
      12. The Object Types dialog appears. Uncheck all options, check the Computers option, and click OK.
      13. Back in the Select User, Computer or Group dialog, enter the FortiSIEM Windows Agent computer names under Enter the object name to select. You can choose computer names by clicking the Advanced button and then, in the Advanced dialog, clicking Find Now.
      14. Once the required computer name is specified, click OK and you will find the selected computer name under Security Filtering.
      15. Repeat steps 11-14 for all the required computers running FortiSIEM Windows Agent.
      16. Right-click on MyGPO in the left pane and click Edit.
      17. In the Group Policy Management Editor, expand Policies under Computer Configuration.
      18. Go to Policies > Windows Settings > Security Settings > Advanced Audit Policy Configuration > Audit Policies > Object Access > Audit File System.
      19. In the Audit File System Properties dialog, under the Policy tab, select Configure the following audit events. Under this, select both the Success and Failure check boxes.
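As an alternative to the Local Security Policy steps in the Workgroup case above, the same Object Access audit setting can be applied from an elevated command prompt with auditpol. This is a sketch of an equivalent approach, not the vendor's documented method:

rem Enable success and failure auditing for the File System subcategory
auditpol /set /subcategory:"File System" /success:enable /failure:enable
rem Verify the resulting setting
auditpol /get /subcategory:"File System"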
Procedure

Installing one agent

  1. Log into the machine where you want to install the agent software as an administrator.
  2. Navigate to the shared location on the Windows Agent Manager machine where you installed the agent installation files in Step 5 of Installing FortiSIEM Windows Agent Manager.

The default path is C:\FortiSIEM\Agent.

  3. In the shared location, double-click the appropriate .MSI file to begin installation.

FortiSIEMAgent-x64.MSI is for the 64-bit Agent, while FortiSIEMAgent-x86.MSI is for the 32-bit Agent.

  4. When the installation completes, go to Start > Administrative Tools > Services and make sure that the FortiSIEM Agent Service has a status of Started.

Installing multiple agents via Active Directory Group Policy

Multiple agents can be installed via GPO if all the computers are on the same domain.

  1. Log on to the Domain Controller.
  2. Create a separate Organizational Unit to contain all the computers where FortiSIEM Windows Agent has to be installed.
    1. Go to Start > Administrative Tools > Active Directory Users and Computers
    2. Right-click on the root Domain in the left tree. Click New > Organizational Unit
    3. Provide a Name for the newly created Organizational Unit and confirm.
    4. Verify that the Organizational Unit has been created.
  3. Assign computers to the new Organizational Unit.
    1. Click Computers under the domain. The list of computers will be displayed on the right pane
    2. Select a computer on the right pane, right-click, select Move, and then select the new Organizational Unit and confirm.
  4. Create a new GPO
    1. Go to Start > Administrative Tools > Group Policy Management
    2. Under Domains, select the newly created Organizational Unit
    3. Right-click on the Organizational Unit and select Create and Link a GPO here…
    4. Enter a Name for the new GPO and click OK.
    5. Verify that the new GPO is created under the chosen Organizational Unit
    6. Right-click on the new GPO and click Edit. The left tree now shows Computer Configuration and User Configuration
    7. Under Computer Configuration, expand Software Settings.
    8. Click New > Package. Then go to the AOWinAgt folder on the network share. Select the Agent MSI you need – 32-bit or 64-bit. Click OK.
    9. The selected MSI shows in the right pane under the Group Policy Editor window
    10. For Deploy Software, select Assigned and confirm.
  5. Update the GPO on the Domain Controller
    1. Open a command prompt
    2. Run gpupdate /force
  6. Update the GPO on the Agents
    1. Log on to the computer
    2. Open a command prompt
    3. Run gpupdate
    4. Restart the computer
    5. You will see FortiSIEM Windows Agent installed after the restart

Upgrade

Upgrade Overview
Upgrading from 3.7.6 to latest
  1. First upgrade to 4.2.1, following the steps here. This involves OS migration.
  2. Upgrade from 4.2.1 to 4.3.1, following the steps here. This involves SVN migration.
  3. Upgrade from 4.3.1 to 4.5.2. This is a regular upgrade – single node case and multi-node case.
  4. Upgrade from 4.5.2 to 4.6.3, following the steps here. This involves the TLS 1.2 upgrade.
  5. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case.
Upgrading from 4.2.x to latest
  1. Upgrade to 4.3.1, following the steps here. This involves SVN migration.
  2. Upgrade from 4.3.1 to 4.5.2. This is a regular upgrade – single node case and multi-node case.
  3. Upgrade from 4.5.2 to 4.6.3, following the steps here. This involves the TLS 1.2 upgrade.
  4. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case.
Upgrading from 4.3.1 to latest
  1. Upgrade from 4.3.1 to 4.5.2. This is a regular upgrade – single node case and multi-node case.
  2. Upgrade from 4.5.2 to 4.6.3, following the steps here. This involves the TLS 1.2 upgrade.
  3. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case.
Upgrading from 4.3.3 to latest
  1. Do the special pre-upgrade step described here.
  2. Upgrade to 4.5.2. This is a regular upgrade – single node case and multi-node case.
  3. Upgrade from 4.5.2 to 4.6.3, following the steps here. This involves the TLS 1.2 upgrade.
  4. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case.
Upgrading from 4.4.x, 4.5.1 to latest
  1. Upgrade to 4.5.2. This is a regular upgrade – single node case and multi-node case.
  2. Upgrade from 4.5.2 to 4.6.3, following the steps here. This involves the TLS 1.2 upgrade.
  3. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case.
Upgrading from 4.5.2 to latest
  1. Upgrade to 4.6.3, following the steps here. This involves the TLS 1.2 upgrade.
  2. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case.
Upgrading from 4.6.1 to latest
  1. Do the special pre-upgrade step described here.
  2. Upgrade to 4.6.3, following the steps here. This involves the TLS 1.2 upgrade.
  3. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case.
Upgrading from 4.6.2 to latest
  1. Upgrade to 4.6.3, following the steps here. This involves the TLS 1.2 upgrade.
  2. Upgrade from 4.6.3 to 4.7.1. This is a regular upgrade – single node case and multi-node case.

Upgrading from 4.6.3 to latest

  1. Upgrade to 4.7.1. This is a regular upgrade – single node case and multi-node case.

Upgrading Windows Agents

FortiSIEM Windows Agent Upgrade is covered in Upgrading FortiSIEM Windows Agent and Agent Manager

Migrating from 3.7.x versions to 4.2.1

The 4.2 version of FortiSIEM uses a new version of CentOS, and so upgrading to version 4.2 from previous versions involves a migration from those versions to 4.2.x, rather than a typical upgrade. This process involves two steps:

  1. You have to migrate the 3.7.6 CMDB to a 4.2.1 CMDB on a 3.7.6 based system.
  2. The migrated 4.2.1 CMDB has to be imported into a 4.2.1 system.

Topics in this section cover the migration process for supported hypervisors, both for in-place migrations and for migrations using staging systems. Using a staging system requires more hardware, but minimizes downtime and CMDB migration risk compared to the in-place method. If you decide to use the in-place method, we strongly recommend that you take snapshots for recovery.

FortiSIEM Migrating VMware ESX-based Deployments

Migrating VMware ESX-based Deployments

The options for migrating VMware ESX deployments depend on whether you are using NFS for storage, and whether you choose to migrate in-place, or by using a staging system or rsync. Using the staging system requires more hardware, but minimizes downtime and CMDB migration risk compared to the in-place approach. The rsync method takes longer to complete because the event database has to be copied. If you use the in-place method, then we strongly recommend that you take snapshots of the CMDB for recovery.

 

Migrating an ESX Deployment with Local Storage in Place

Overview

This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on a local VMware disk, and where you intend to run a 4.2.x version on the same physical machine as the 3.7.x version, but as a new virtual machine. This process requires these steps:

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Use More Storage for Your 4.2.1 Virtual Appliance

Install the 4.2.1 virtual appliance on the same host as the 3.7.x version with a local disk that is larger than the original 3.7.x version. You will need the extra disk space for copying operations during the migration.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running phtools.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to /root.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created in step 7 (a command sketch follows after this list).

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.
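The command lines for steps 6-9 are not reproduced above. The following is a rough sketch only; the phtools arguments, the archive destination directory, and the exact archive file name are assumptions, so check each script's usage output before running anything.

# Stop back-end processes on the node (argument form is an assumption)
phtools --stop all
# Create the 3.7.x CMDB archive in a destination directory of your choice
./ao-db-migration-archiver.sh /archive
# Migrate the archived 3.7.x CMDB: first argument is the archive location,
# second argument is where the migrated CMDB file will be kept
./ao-db-migration.sh /archive/phoenixdb_migration_<timestamp> /migrated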

Removing the Local Disk from the 3.7.x Virtual Appliance

  1. Log in to your vSphere client.
  2. Select your 3.7.x virtual appliance and power it off.
  3. Open the Hardware properties for your virtual appliance.
  4. Select Hard disk 3, and then click Remove.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Adding the Local Disk to the 4.2.1 Virtual Appliance

  1. Log into your vSphere client.
  2. Select your 4.2.1 virtual appliance and power it off.
  3. Go the Hardware settings for your virtual appliance and select Hard disk 3.
  4. Click Remove.
  5. Click Add.
  6. For Device Type, select Hard Disk, and then click Next.
  7. Select Use an existing virtual disk, and then click Next.
  8. Browse to the location of the migrated virtual disk that was created by the migration script, and then click OK.
  9. Power on the virtual appliance.

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. In the vSphere client, power off the 3.7.x Supervisor.

The IP Address for the 3.7.x Supervisor will be transferred to the  4.2.1 Supervisor.

  2. Log in to the 3.7.x Supervisor as root over SSH.
  3. Run the vami_config_net script to change the IP address.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password.

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards have been migrated successfully.

 

 

 

I love what I do


Technology is amazing. It drastically reduces the size of the world. I love that I get to work with it every day. It’s just a little before 1 AM here in Montgomery, Alabama. I just finished assisting a friend in Saudi Arabia with their FortiGate issue. This friend I met through this site. Think about that for a second. Small town Alabama boy getting to meet and help people from all over the world.

It’s an amazing time we live in. Good night everyone! See you in the morning!

Migrating an ESX Local Disk-based Deployment Using an rsync Tool


Migrating an ESX Local Disk-based Deployment Using an rsync Tool

Overview

This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on a local VMware disk, and where you intend to run the 4.2.1 version on a different physical machine than the 3.7.x version. This process requires these steps:

Overview

Prerequisites

Copy the 3.7.x CMDB to a 4.2.1 Virtual Appliance Using rsync

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

  1. Log in to the 4.2.1 virtual appliance as root.
  2. Check the disk size in the remote system to make sure that there is enough space for the database to be copied over.
  3. Copy the directory /data from the 3.7.x virtual appliance to the 4.2.1 virtual appliance using the rsync tool (a sketch follows after this list).
  4. After copying is complete, make sure that the size of the event database is identical to the 3.7.x system.
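The following is a minimal sketch of the copy in step 3, assuming root SSH access from the 4.2.1 appliance to the 3.7.x appliance; the source IP is a placeholder.

# Pull /data from the 3.7.x appliance onto the 4.2.1 appliance (archive mode, compressed)
rsync -avz root@<3.7.x-IP>:/data/ /data/
# Afterwards, compare the event database size on both systems (step 4)
du -sh /data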

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running phtools.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to /root.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created in step 7.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. In the vSphere client, power off the 3.7.x Supervisor.

The IP Address for the 3.7.x Supervisor will be transferred to the  4.2.1 Supervisor.

  2. Log in to the 3.7.x Supervisor as root over SSH.
  3. Run the vami_config_net script to change the IP address.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password.

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards have been migrated successfully.

 


Migrating an ESX NFS-based Deployment in Place


Migrating an ESX NFS-based Deployment in Place

Overview

In this migration method, the production FortiSIEM systems are upgraded in place, meaning that the production 3.7.x virtual appliance is stopped and used for migrating the CMDB to the 4.2.1 virtual appliance. The advantage of this approach is that no extra hardware is needed, while the disadvantage is extended downtime during the CMDB archive and upgrade process. During this downtime, events are not lost but are buffered at the Collector. However, incidents are not triggered while events are buffered. Prior to the CMDB upgrade process, you might want to take a snapshot of CMDB to use as a backup if needed.

The steps for this process are:

Overview

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running phtools.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to /root.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created in step 7.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance.

This file will be used during the CMDB restoration process.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. In the /etc/fstab file on the Supervisor or Worker, change the mount path to the 3.7.x NFS location (an example entry follows after this list).
  4. Reboot the Supervisor or Worker.
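As an illustration of step 3, an /etc/fstab entry pointing at the existing NFS export might look like the line below; the server IP and export path are placeholders, so use the NFS location from your 3.7.x deployment.

# NFS mount for the FortiSIEM event data (placeholder server and path)
<NFS-server-IP>:/data   /data   nfs   defaults   0 0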

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. In the vSphere client, power off the 3.7.x Supervisor.

The IP Address for the 3.7.x Supervisor will be transferred to the  4.2.1 Supervisor.

  2. Log in to the 3.7.x Supervisor as root over SSH.
  3. Run the vami_config_net script to change the IP address.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password.

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards have been migrated successfully.

 

Migrating an ESX NFS-based Deployment via a Staging System


Migrating an ESX NFS-based Deployment via a Staging System

Overview

In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror-image 3.7.x system is first created, and then upgraded to 4.2.1. The NFS storage is mounted to the upgraded 4.2.1 system, and the Collectors are redirected to the upgraded 4.2.1 system. The upgraded 4.2.1 system now becomes the production system, while the old 3.7.6 system can be decommissioned. The Collectors can then be upgraded one by one. The advantages of this method are minimal downtime in which incidents aren’t triggered, and no upgrade risk: if for some reason the upgrade fails, it can be aborted without any risk to your production CMDB data. The disadvantages of this method are the extra hardware required to set up the 3.7.x mirror system, and the longer time to complete the upgrade because of the time needed to set up the mirror system.

The steps in this process are:

Overview

Prerequisites

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Create the 3.7.x CMDB Archive

  1. Log in to your running 3.7.x production AccelOps virtual appliance as root.
  2. Change the directory to /root.
  3. Copy the migration script ao-db-migration-4.2.1.tar to the /root directory.
  4. Untar the migration script.
  5. Make sure that the owner of ao-db-migration.sh and ao-db-migration-archiver.sh files is root.
  6. Run the archive script, specifying the directory where you want the archive file to be created.
  7. Check that the archived files were successfully created in the destination directory.

You should see two files: cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-migration-*.tar, which contains files stored outside of CMDB that will be needed to restore the upgraded CMDB to your new 4.2.1 virtual appliance.

  8. Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you used in Step 6.
  9. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. In the /etc/fstab file on the Supervisor or Worker, change the mount path to the 3.7.x NFS location.
  4. Reboot the Supervisor or Worker.

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. In the vSphere client, power off the 3.7.x Supervisor.

The IP Address for the 3.7.x Supervisor will be transferred to the  4.2.1 Supervisor.

  2. Log in to the 3.7.x Supervisor as root over SSH.
  3. Run the vami_config_net script to change the IP address.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password.

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards have been migrated successfully.

 

Migrating AWS EC2 Deployments


Migrating AWS EC2 Deployments

This section covers migrating FortiSIEM AWS EC2 based virtual appliances from 3.7.x to 4.2.1. Since FortiSIEM 4.2.1 has a new CentOS version, the procedure is unlike a regular upgrade (say, from 3.7.5 to 3.7.6) – certain special procedures have to be followed.

Very broadly, the 3.7.6 CMDB has to be first migrated to a 4.2.1 CMDB on a 3.7.6 based system, and then the migrated 4.2.1 CMDB has to be imported into a 4.2.1 system.

There are four choices, based on:

whether it is an NFS or a single virtual appliance based deployment

whether the in-place or the staging-based method is chosen for data migration

The various methods are explained later but, stated simply, the staging approach takes more hardware but minimizes downtime and CMDB migration risk compared to the in-place approach.

If the in-place method is deployed, then taking a snapshot is highly recommended for recovery purposes.

 

Note: Internet access is needed for the migration to succeed, because a third-party library needs to access the schema website.

Migrating an AWS EC2 Local Disk-based Deployment

Overview

This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on a local AWS volume, and where you intend to run the 4.2.1 version in the same AWS environment as the 3.7.x version, but as a new virtual machine.

Overview

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Change Local Volumes for Your AWS Instances

Change the IP Addresses Associated with Your Virtual Appliances

Registering Workers to the Supervisor

Setting the 4.2.1 SVN Password to the 3.7.x Password

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running the phtools command, and then run the archive script ao-db-migration-archiver.sh, specifying the directory where you want the archive files to be created.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 virtual appliance.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created above.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.
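A command-line sketch of steps 6 through 11 above; the phtools arguments, directory names, and archive file names are examples rather than exact values.

phtools --stop ALL                                       # step 6: stop back-end processes
./ao-db-migration-archiver.sh /root/cmdb-archive         # step 6: create the archive files (destination is an example)
ls -al /root/cmdb-archive                                # step 7: expect phoenixdb_migration_* and opt-migration-*.tar
./ao-db-migration.sh <archived_3.7.x_CMDB> <output_location>    # step 9: arguments as described above
ls -al <output_location>                                 # step 10: confirm the migrated files exist
scp <output_location>/phoenixdb_migration_xyz root@<4.2.1_IP>:/root/    # step 11: copy to the 4.2.1 appliance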

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Change Local Volumes for Your AWS Instances

  1. Log in to the AWS EC2 dashboard and power off your 4.2.1 virtual appliance.
  2. In the Volumes table, find your production 3.7.x volume and tag it so you can identify it later, while also making a note of its ID.

For instance, 3.7.x_data_volume.

  3. Detach the volume.
  4. In the Volumes tab, find your 4.2.1 volume and detach it.
  5. Attach the 3.7.x volume to your 4.2.1 virtual appliance.
  6. Power on your 4.2.1 virtual appliance.
  7. Stop all back-end processes and change the SVN URL and Server IP address in the database by running these commands.
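If you prefer the AWS CLI over the EC2 console for the volume swap in steps 3 through 5, the equivalent calls look roughly like this; the volume, instance, and device identifiers are placeholders.

aws ec2 detach-volume --volume-id <3.7.x_volume_id>
aws ec2 detach-volume --volume-id <4.2.1_volume_id>
aws ec2 attach-volume --volume-id <3.7.x_volume_id> --instance-id <4.2.1_instance_id> --device /dev/sdb
aws ec2 start-instances --instance-ids <4.2.1_instance_id>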

Change the IP Addresses Associated with Your Virtual Appliances

  1. Log in to the AWS EC2 dashboard.
  2. Click Elastic IPs, and then select the public IP associated with your 4.2.1 virtual appliance.
  3. Click Disassociate Address, and then Yes, Disassociate.
  4. In Elastic IPs, select the IP address associated with your 3.7.x virtual appliance.
  5. Click Disassociate Address, and then Yes, Disassociate.
  6. In Elastic IPs, select the production public IP of your 3.7.x virtual appliance, and click Associate Address to associate it with your 4.2.1 virtual appliance.

The virtual appliance will reboot automatically after the IP address is changed.
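The same re-association can be scripted with the AWS CLI if you prefer; the association and allocation IDs below are placeholders and apply to VPC Elastic IPs.

aws ec2 disassociate-address --association-id <4.2.1_eipassoc_id>
aws ec2 disassociate-address --association-id <3.7.x_eipassoc_id>
aws ec2 associate-address --allocation-id <3.7.x_eipalloc_id> --instance-id <4.2.1_instance_id>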

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script: ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

Migrating an AWS EC2 NFS-based Deployment in Place


Overview

In this migration method, the production FortiSIEM systems are upgraded in-place, meaning that the production 3.7.x virtual appliance is stopped and used for migrating the CMDB to the 4.2.1 virtual appliance. The advantage of this approach is that no extra hardware is needed, while the disadvantage is extended downtime during the CMDB archive and upgrade process. During this downtime, events are not lost but are buffered at the Collectors. However, incidents are not triggered while events are buffered. Prior to the CMDB upgrade process, you might want to take a snapshot of the CMDB to use as a backup if needed.

The steps for this process are:

Overview

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Change the SVN URL and Server IP Address

Change the IP Addresses Associated with Your Virtual Appliances

Registering Workers to the Supervisor

Setting the 4.2.1 SVN Password to the 3.7.x Password

 

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running the phtools command, and then run the archive script ao-db-migration-archiver.sh, specifying the directory where you want the archive files to be created.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 virtual appliance.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created above.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. In the /etc/fstab file on the Supervisor or Worker, change the mount path to the 3.7.x mount path location.
  4. Reboot the Supervisor or Worker.

Change the SVN URL and Server IP Address

Run these commands.

 

Change the IP Addresses Associated with Your Virtual Appliances

 

  1. Log in to the AWS EC2 dashboard.
  2. Click Elastic IPs, and then select the public IP associated with your 4.2.1 virtual appliance.
  3. Click Disassociate Address, and then Yes, Disassociate.
  4. In Elastic IPs, select the IP address associated with your 3.7.x virtual appliance.
  5. Click Disassociate Address, and then Yes, Disassociate.
  6. In Elastic IPs, select the production public IP of your 3.7.x virtual appliance, and click Associate Address to associate it with your 4.2.1 virtual appliance.

The virtual appliance will reboot automatically after the IP address is changed.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script: ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

 

Migrating an AWS EC2 NFS-based Deployment via a Staging System


Overview

Overview

Prerequisites

Create the 3.7.x CMDB Archive

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Change the IP Addresses Associated with Your Virtual Appliances

Registering Workers to the Supervisor

Setting the 4.2.1 SVN Password to the 3.7.x Password

In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror-image 3.7.x system is first created and then upgraded to 4.2.1. The NFS storage is mounted on the upgraded 4.2.1 system, and the Collectors are redirected to it. The upgraded 4.2.1 system then becomes the production system, while the old 3.7.x system can be decommissioned. The Collectors can then be upgraded one by one. The advantages of this method are minimal downtime during which incidents aren't triggered, and no upgrade risk: if for some reason the upgrade fails, it can be aborted without any risk to your production CMDB data. The disadvantages are the extra hardware required to set up the mirror 3.7.x system, and the longer time needed to complete the upgrade because the mirror system must be set up first.

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Create the 3.7.x CMDB Archive

  1. Log in to your running 3.7.x production AccelOps virtual appliance as root.
  2. Change the directory to /root.
  3. Copy the migration script ao-db-migration-4.2.1.tar to the /root directory.
  4. Untar the migration script.
  5. Make sure that the owner of ao-db-migration.sh and ao-db-migration-archiver.sh files is root.
  6. Run the archive script, specifying the directory where you want the archive file to be created.
  7. Check that the archived files were successfully created in the destination directory.

You should see two files: cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-migration-*.tar, which contains files stored outside of CMDB that will be needed to restore the upgraded CMDB to your new 4.2.1 virtual appliance.

  8. Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you used in Step 6.
  9. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. In the /etc/fstab file on the Supervisor or Worker, change the mount path to the 3.7.x mount path location.
  4. Reboot the Supervisor or Worker.

Change the IP Addresses Associated with Your Virtual Appliances

  1. Log in to the AWS EC2 dashboard.
  2. Click Elastic IPs, and then select the public IP associated with your 4.2.1 virtual appliance.
  3. Click Disassociate Address, and then Yes, Disassociate.
  4. In Elastic IPs, select the IP address associated with your 3.7.x virtual appliance.
  5. Click Disassociate Address, and then Yes, Disassociate.
  6. In Elastic IPs, select the production public IP of your 3.7.x virtual appliance, and click Associate Address to associate it with your 4.2.1 virtual appliance.

The virtual appliance will reboot automatically after the IP address is changed.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script: ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

Migrating KVM-based deployments


This section covers migrating FortiSIEM KVM based Virtual Appliances from 3.7.x to 4.2.1. Because FortiSIEM 4.2.1 is based on a new CentOS version, the procedure is unlike a regular upgrade (say, from 3.7.5 to 3.7.6), and certain special procedures have to be followed.

Very broadly, the 3.7.6 CMDB has to be migrated first to a 4.2.1 CMDB on a 3.7.6 based system, and then the migrated 4.2.1 CMDB has to be imported into a 4.2.1 system.

There are four choices, based on whether:

An NFS based or a single virtual appliance based deployment is used

The in-place, the staging, or the rsync based method is chosen for data migration

The various methods are explained later, but stated simply:

The staging approach takes more hardware but minimizes downtime and CMDB migration risk compared to the in-place approach. The rsync method takes longer to finish because the event database has to be copied.

If the in-place method is used, taking a snapshot beforehand is highly recommended for recovery purposes.

 

Note: Internet access is needed for the migration to succeed, because a third-party library needs to access the schema website.

Migrating a KVM Local Disc-based Deployment In Place


Overview

This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on a local KVM disk, and where you intend to run the 4.2.1 version on the same physical machine as the 3.7.x version, but as a new virtual machine. This process requires these steps:

Overview

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Use More Storage for Your 4.2.1 Virtual Appliance

Install the 4.2.1 virtual appliance on the same host as the 3.7.x version with a local disk that is larger than the original 3.7.x version. You will need the extra disk space for copying operations during the migration.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running the phtools command, and then run the archive script ao-db-migration-archiver.sh, specifying the directory where you want the archive files to be created.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 virtual appliance.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created above.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.

Removing the Local Disk from the 3.7.x Virtual Appliance

  1. Log in to Virtual Machine Manager.
  2. Select your 3.7.x virtual appliance and power it off.
  3. Open the Hardware properties for your virtual appliance.
  4. Select IDE Disk 2, and then click Remove.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Adding the Local Disk to the 4.2.1 Virtual Appliance

  1. Log in to Virtual Machine Manager.
  2. Select your 4.2.1 virtual appliance and power it off.
  3. Go the Hardware settings for your virtual appliance and select IDE Disk 3.
  4. Click Remove.
  5. Click Add Hardware.
  6. Select
  7. Select the option to use managed or existing storage, and then browse to the location of the detached 3.7.x disk.
  8. Click Finish.
  9. Select Use an existing virtual disk, and then click Next.
  10. Browse to the location of the migrated virtual disk that was created by the migration script, and then click OK.
  11. Power on the virtual appliance.

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. In Virtual Machine Manager, power off the 3.7.x Supervisor.

The IP address of the 3.7.x Supervisor will be transferred to the 4.2.1 Supervisor.

  2. Log in to the 4.2.1 Supervisor as root over SSH.
  3. Run the vami_config_net script and assign the 3.7.x Supervisor's IP address to the 4.2.1 Supervisor.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script: ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

 


Migrating a KVM Local Disk-based Deployment using an RSYNC Tool


Overview

This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on a local KVM disk, and where you intend to run the 4.2.1 version on a different physical machine than the 3.7.x version. This process requires these steps:

Overview

Prerequisites

Copy the 3.7.x CMDB to a 4.2.1 Virtual Appliance Using rsync

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Copy the 3.7.x CMDB to a 4.2.1 Virtual Appliance Using rsync

  1. Log in to the 4.2.1 virtual appliance as root.
  2. Check the disk size in the remote system to make sure that there is enough space for the database to be copied over.
  3. Copy the directory /data from the 3.7.x virtual appliance to the 4.2.1 virtual appliance using the rsync tool.
  4. After copying is complete, make sure that the size of the event database is identical to the 3.7.x system.
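A sketch of steps 2 through 4, assuming the event database lives under /data on both appliances; the 3.7.x IP address is a placeholder.

df -h /data                                  # step 2: confirm enough free space on the 4.2.1 appliance
rsync -av root@<3.7.x_IP>:/data/ /data/      # step 3: copy the event database over SSH
du -sh /data                                 # step 4: compare with the size reported on the 3.7.x system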

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running the phtools command, and then run the archive script ao-db-migration-archiver.sh, specifying the directory where you want the archive files to be created.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 virtual appliance.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created above.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance. This file will be used during the CMDB restoration process.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

  1. In Virtual Machine Manager, power off the 3.7.x Supervisor.

The IP address of the 3.7.x Supervisor will be transferred to the 4.2.1 Supervisor.

  2. Log in to the 4.2.1 Supervisor as root over SSH.
  3. Run the vami_config_net script and assign the 3.7.x Supervisor's IP address to the 4.2.1 Supervisor.

Your virtual appliance will reboot when the IP address change is complete.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script: ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

 

Migrating a KVM NFS-based Deployment In Place


Overview

In this migration method, the production FortiSIEM systems are upgraded in-place, meaning that the production 3.7.x virtual appliance is stopped and used for migrating the CMDB to the 4.2.1 virtual appliance. The advantage of this approach is that no extra hardware is needed, while the disadvantage is extended downtime during the CMDB archive and upgrade process. During this downtime, events are not lost but are buffered at the Collectors. However, incidents are not triggered while events are buffered. Prior to the CMDB upgrade process, you might want to take a snapshot of the CMDB to use as a backup if needed.

The steps for this process are:

Overview

Prerequisites

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Upgrading the 3.7.x CMDB to 4.2.1 CMDB

  1. Log in over SSH to your running 3.7.x virtual appliance as root.
  2. Change the directory to /root.
  3. Move or copy the migration script ao-db-migration-4.2.1.tar to /root.
  4. Untar the migration script.
  5. Run ls -al to check that root is the owner of the files ao-db-migration.sh and ao-db-migration-archiver.sh.
  6. For each AccelOps Supervisor, Worker, or Collector node, stop all backend processes by running the phtools command, and then run the archive script ao-db-migration-archiver.sh, specifying the directory where you want the archive files to be created.
  7. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully created in the destination directory.
  8. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 virtual appliance.

This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.

  9. Run the migration script on the 3.7.x CMDB archive you created above.

The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the migrated CMDB file will be kept.

  10. Make sure the migrated files were successfully created.
  11. Copy the migrated CMDB phoenixdb_migration_xyz file to the /root directory of your 4.2.1 virtual appliance.

This file will be used during the CMDB restoration process.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. In the /etc/fstab file on the Supervisor or Worker, change the mount path to the 3.7.x mount path location.
  4. Reboot the Supervisor or Worker.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script: ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

Migrating a KVM NFS-based Deployment via a Staging System


Overview

In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror-image 3.7.x system is first created and then upgraded to 4.2.1. The NFS storage is mounted on the upgraded 4.2.1 system, and the Collectors are redirected to it. The upgraded 4.2.1 system then becomes the production system, while the old 3.7.x system can be decommissioned. The Collectors can then be upgraded one by one. The advantages of this method are minimal downtime during which incidents aren't triggered, and no upgrade risk: if for some reason the upgrade fails, it can be aborted without any risk to your production CMDB data. The disadvantages are the extra hardware required to set up the mirror 3.7.x system, and the longer time needed to complete the upgrade because the mirror system must be set up first.

The steps in this process are:

Overview

Prerequisites

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

Mounting the NFS Storage on Supervisors and Workers

Assigning the 3.7.x Supervisor’s IP Address to the 4.2.1 Supervisor

Registering Workers to the Supervisor

Prerequisites

Contact AccelOps Support to reset your license

Take a snapshot of your 3.7.x installation for recovery purposes if needed

Make sure the 3.7.x virtual appliance has Internet access

Download the 4.2.1 migration scripts (ao-db-migration-4.2.1.tar). You will need the Username and Password associated with your AccelOps license to access the scripts.

Create the 3.7.x CMDB Archive

  1. Log in to your running 3.7.x production AccelOps virtual appliance as root.
  2. Change the directory to /root.
  3. Copy the migration script ao-db-migration-4.2.1.tar to the /root directory.
  4. Untar the migration script.
  5. Make sure that the owner of ao-db-migration.sh and ao-db-migration-archiver.sh files is root.
  6. Run the archive script, specifying the directory where you want the archive file to be created.
  7. Check that the archived files were successfully created in the destination directory.

You should see two files: cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-migration-*.tar, which contains files stored outside of CMDB that will be needed to restore the upgraded CMDB to your new 4.2.1 virtual appliance.

  8. Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you used in Step 6.
  9. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor.

Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance

  1. Log in to your 4.2.1 virtual appliance as root.
  2. Change the directory to /opt/phoenix/deployment/.
  3. Run the post-ao-db-migration.sh script with the 3.7.x migration files phoenixdb_migration_xyz and opt-migration-*.tar.
  4. When the migration script completes, the virtual appliance will reboot.

Mounting the NFS Storage on Supervisors and Workers

Follow this process for each Supervisor and Worker in your deployment.

  1. Log in to your virtual appliance as root over SSH.
  2. Run the mount command to check the mount location.
  3. In the /etc/fstab file on the Supervisor or Worker, change the mount path to the 3.7.x mount path location.
  4. Reboot the Supervisor or Worker.

Registering Workers to the Supervisor

  1. Log in to the Supervisor as admin.
  2. Go to Admin > License Management.
  3. Under VA Information, click Add, and add the Worker.
  4. Under Admin > Collector Health and Cloud Health, check that the health of the virtual appliances is normal.

Setting the 4.2.1 SVN Password to the 3.7.x Password

  1. Log in to the 4.2.1 Supervisor as root over SSH.
  2. Change the directory to /opt/phoenix/deployment/jumpbox.
  3. Run the SVN password reset script: ./phsetsvnpwd.sh
  4. Enter the following full admin credentials to reset the SVN password:

Organization: Super

User: admin

Password:****

Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated successfully.

Migrating Collectors

  1. After migrating all your Supervisors and Workers to 4.2.1, install the 4.2.1 Collectors.
  2. SSH to the 3.7.x Collector as root.
  3. Change the directory to /opt/phoenix/cache/parser/events.
  4. Copy the files from this directory to the same directory on the 4.2.1 system.
  5. Change the directory to /opt/phoenix/cache/parser/upload/svn.
  6. Copy the files from this directory to the same directory on the 4.2.1 system.
  7. Power off the 3.7.x Collector.
  8. SSH to the 4.2.1 Collector and change its IP address to the same as the 3.7.x Collector by running the vami_config_net script.
  9. In a browser, navigate to https://<4.2.1_Collector_IP_address>:5480 and fill in the administration information to complete the Collector setup.
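A sketch of the copy in steps 3 through 6, run from the 3.7.x Collector; the 4.2.1 Collector's IP address is a placeholder.

scp -r /opt/phoenix/cache/parser/events/* root@<4.2.1_collector_IP>:/opt/phoenix/cache/parser/events/
scp -r /opt/phoenix/cache/parser/upload/svn/* root@<4.2.1_collector_IP>:/opt/phoenix/cache/parser/upload/svn/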

 

 

Migrating the SVN Repository to a Separate Partition on a Local Disk

If you are using NFS storage, your SVN repository will be migrated to a local disk to improve performance and reliability. If you are using local storage only, the SVN repository will be moved out of the /data partition and into an /svn partition.

  1. Download the ao-svn-migration.sh script from the image server (https://images.FortiSIEM.net/upgrade/va/4.3.1).
  2. Copy or move the ao-svn-migration.sh script to /root.
  3. Run ls -al to check that root is the owner of ao-svn-migration.sh.
  4. Run chmod to change the permissions on ao-svn-migration.sh to 755.
  5. Reboot the machine.
  6. Log in to the Supervisor as root and run the ao-svn-migration.sh script.
  7. When the script executes, you will be asked to confirm that you have 60GB of local storage available for the migration. When the script completes, you will see the message Upgrade Completed. SVN disk migration done.
  8. Run df -h to confirm that the /svn partition was created.
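A condensed sketch of the download, permission, and verification steps (follow the reboot ordering given above); the exact file location on the image server may differ.

cd /root
wget https://images.FortiSIEM.net/upgrade/va/4.3.1/ao-svn-migration.sh    # exact download path may differ
chmod 755 ao-svn-migration.sh
ls -al ao-svn-migration.sh        # owner should be root
./ao-svn-migration.sh             # asks you to confirm 60GB of free local storage
df -h | grep /svn                 # verify the new /svn partition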

Special pre-upgrade instruction for 4.3.3

  1. SSH as root into the Supervisor node
  2. Download the phupdateinstall-4.3.3.sh script.
  3. Copy or move the phupdateinstall-4.3.3.sh script to /root
  4. Run chmod to change the permissions on phupdateinstall-4.3.3.sh to 755
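A sketch of steps 3 and 4; the final execution line is an assumption, since the instructions above stop at the chmod step.

cp phupdateinstall-4.3.3.sh /root/
cd /root
chmod 755 phupdateinstall-4.3.3.sh
./phupdateinstall-4.3.3.sh        # assumed execution step, not shown in the original instructions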

Special pre-upgrade instruction for 4.6.1

Instructions for Supervisor node

Run the following command as root.

Instructions for Collector nodes

Run the following command as root on each collector prior to upgrading the collector from the GUI, or the upgrade will fail:

Enabling TLS 1.2 Patch On Old Collectors

Older AccelOps Collectors (4.5.2 or earlier, running JDK 1.7) do not have TLS 1.2 enabled. To enable them to communicate with FortiSIEM 4.6.3, follow these steps:

  1. SSH to the Collector and edit /opt/phoenix/bin/runJavaAgent.sh.
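The exact edit is not reproduced in this excerpt. A common way to enable TLS 1.2 on JDK 1.7 clients is to pass the https.protocols system property on the java command line inside the script; treat the line below as an assumption rather than the vendor-supplied patch.

# inside /opt/phoenix/bin/runJavaAgent.sh, add the property to the existing java invocation, for example:
#   java -Dhttps.protocols=TLSv1.2 <existing arguments unchanged>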

 

Upgrading to 4.6.3 for TLS 1.2


Enforcing TLS 1.2 requires that the following steps be followed in strict order for the upgrade to succeed. Additional steps for TLS 1.2 compatibility are marked in bold.

  1. Remove /etc/yum.repos.d/accelops* and run “yum update” on Collectors, Worker(s), and the Supervisor to get all TLS 1.2 related libraries up to date. Follow this yum update order: Collectors, then Worker(s), then the Supervisor.
  2. If your environment has a Collector and it is running AccelOps 4.5.2 or earlier (with JDK 1.7), then first patch the Collector for TLS 1.2 compatibility (see Enabling TLS 1.2 Patch On Old Collectors above). This step is not required for Collectors running AccelOps 4.6.1 or later.
  3. Pre-upgrade step for upgrading the Supervisor: stop FortiSIEM (previously AccelOps) processes on all Workers by running “phtools --stop ALL”.

Collectors can be up and running. This is to avoid a build-up of report files.

  4. Upgrade the Supervisor following the usual steps.
  5. If your environment has Worker nodes, upgrade the Workers following the usual steps.
  6. If your environment has AccelOps Windows Agents, upgrade Windows Agent Manager from 1.1 to 2.0. Note that there are special pre-upgrade steps to enable TLS 1.2 (see Upgrade from V1.1 to V2.0 below).
  7. If your environment has Collectors, upgrade the Collectors following the usual steps.
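Step 1 above amounts to running the following on each node, in the order given (Collectors, then Workers, then the Supervisor):

rm -f /etc/yum.repos.d/accelops*
yum update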

Setting Up the Image Server for Collector Upgrades

If you want to upgrade a multi-tenant deployment that includes Collectors, you must set up and then specify an image server that will be used as a repository for the Collector upgrade files. You can use a standard HTTP server for this purpose, but there is a preferred directory structure for the server. These instructions describe how to set up that structure, and then add a reference to the image server in your Supervisor node.

Setting Up the Image Server Directories
  1. Log into the image server with Admin rights.
  2. Create the directory images/collector/upgrade.
  3. Download the latest collector image upgrade file from https://images.FortiSIEM.net/upgrade/offline/co/latest4/ to images/collector/upgrade.
  4. Untar the file.
  5. Test the image server locations by entering one of the following addresses into a browser:

http://images.myserver.net/vms/collector/upgrade/latest/
https://images.myserver.net/vms/collector/upgrade/latest/
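As a sketch, on a standard Apache-style HTTP server the directory setup might look like the following; the web root path, package file name, and host name are examples only.

mkdir -p /var/www/html/images/collector/upgrade
cd /var/www/html/images/collector/upgrade
wget https://images.FortiSIEM.net/upgrade/offline/co/latest4/<collector_upgrade_package>.tar    # exact file name varies
tar -xvf <collector_upgrade_package>.tar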

Setting the Image Server in the Supervisor
  1. Log in to your Supervisor node.
  2. Go to Admin > General Settings > System.
  3. Under Image Server, enter the URL or IP address for your image server.
  4. Enter the authentication credentials for your image server.
  5. Click Save.

Upgrading a FortiSIEM Single Node Deployment


These instructions cover the upgrade process for a FortiSIEM Enterprise deployment with a single Supervisor.

  1. Using SSH, log in to the FortiSIEM virtual appliance as the root user.

Your console will display the progress of the upgrade process.

  2. When the upgrade process is complete, your FortiSIEM virtual appliance will reboot.
  3. Log in to your virtual appliance, and in the Admin > Cloud Health page, check that you are running the upgraded version of FortiSIEM.

Upgrading a FortiSIEM Cluster Deployment

Overview

Upgrading Supervisors and Workers

Upgrading Collectors

Overview

Follow these steps when upgrading a VA cluster:

  1. Shut down all Workers. Collectors can be up and running.
  2. Upgrade the Supervisor first (while all Workers are shut down).
  3. After the Supervisor is up and running, upgrade the Workers one by one.
  4. Upgrade the Collectors.

Step 1 prevents the accumulation of report files while the Supervisor is not available during the upgrade (step 2). If these steps are not followed, the Supervisor may not be able to come up after the upgrade because of excessive unprocessed report file accumulation.

Note: Both the Supervisor and Workers MUST be on the same FortiSIEM version, or various software modules may not work properly. However, Collectors can be on older versions; they will work, except that they may not have the latest discovery and performance monitoring features in the Supervisor/Worker versions. So FortiSIEM recommends that you also upgrade Collectors within a short period of time.

If you have Collectors in your deployment, make sure you have configured an image server to use as a repository for the Collector upgrade files.

Upgrading Supervisors and Workers

For both Supervisor and Worker nodes, follow the upgrade process described here, but be sure to upgrade the Supervisor node first.

  1. Using SSH, log in to the FortiSIEM virtual appliance as the root user.

Your console will display the progress of the upgrade process.

  2. When the upgrade process is complete, your FortiSIEM virtual appliance will reboot.
  3. Log in to your virtual appliance, and in the Admin > Cloud Health page, check that you are running the upgraded version of FortiSIEM.
Upgrading Collectors

The process for upgrading Collectors is similar to the process for Supervisors and Workers, but you must initiate the Collector process from the Supervisor.

  1. Log in to the Supervisor node as an administrator.
  2. Go to Admin > General Settings.
  3. Under Image Server Settings, enter the download path to the upgrade image, and the Username and Password associated with your license.
  4. Go to Admin > Collector Health.
  5. Click Download Image, and then click Yes to confirm the download.

As the download progresses, you can click Refresh to check its status.

  6. When Finished appears in the Download Status column of the Collector Health page, click Install Image.

The upgrade process will begin, and when it completes, your virtual appliance will reboot. The amount of time it takes for the upgrade to complete depends on the network speed between your Supervisor node and the Collectors.

  7. When the upgrade is complete, make sure that your Collector is running the upgraded version of FortiSIEM.

Upgrading FortiSIEM Windows Agent and Agent Manager

Upgrade from V1.0 to V1.1

Upgrade from V1.1 to V2.0

Upgrade from V2.0 to V2.1

Upgrading Windows Agent License

Uninstalling Agents

Upgrade from V1.0 to V1.1

Version 1.0 and 1.1 Backward Incompatibility

Note: 1.0 Agents and Agent Managers communicate only over HTTP, while 1.1 Agents and Agent Managers communicate only over HTTPS. Consequently, 1.1 Agents and Agent Managers are not backward compatible with 1.0 Agents and Agent Managers. You have to completely upgrade the entire system of Agents and Agent Managers.

  1. Uninstall V1.0 Agents.
  2. Close the V1.0 Agent Manager application.
  3. Uninstall V1.0 Agent Manager.
  4. Bind Default Website with HTTPS as described in Pre-requisite in Installing FortiSIEM Windows Agent Manager.
  5. Install V1.1 Agent Manager following Installing FortiSIEM Windows Agent Manager.
    1. In the Database Settings dialog, enter the V1.0 database path as the “FortiSIEM Windows Agent Manager” SQL Server database path (Procedures Step 6 in Installing FortiSIEM Windows Agent Manager).
    2. Enter the same Administrator username and password (as the previous installation) in the Agent Manager Administrator account creation dialog.
  6. Install V1.1 Agents.
  7. Assign licenses again. Use the Export and Import feature.
Upgrade from V1.1 to V2.0
Windows Agent Manager
  1. Enable TLS 1.2 on Agent Manager. FortiSIEM Supervisor/Worker 4.6.3 and above enforces the use of TLS 1.2 for tighter security. However, by default only SSL3 / TLS 1.0 is enabled in Windows Server 2008-R2. Therefore, enable TLS 1.2 for Windows Agent Manager 2.0 so that it can operate with FortiSIEM Supervisor/Worker 4.6.3 and above.
    1. Start an elevated Command Prompt (that is, with administrative privileges) on the Windows Agent Manager 1.1 host.
    2. Run the following commands sequentially as shown.
    3. Restart computer
  2. Uninstall Agent Manager 1.1
  3. Install the SQL Server 2012-SP1 Feature Pack on the Agent Manager host, available at https://www.microsoft.com/en-in/download/details.aspx?id=35
    1. Select the language of your choice and mark the following two MSIs (choose x86 or x64 depending on your platform) for download:
      1. msi
      2. msi
    2. Click the Download button to download those two MSIs. Then double-click each MSI to install them one by one.
  4. Install Agent Manager 2.0
    1. In Database Settings dialog, set the old database path as AccelOpsCAC database path.
    2. Enter the same Administrator username and password (as in the previous installation) in the new Agent Manager Administrator account creation dialog.
  5. Run Database migration utility to convert from 1.1 to 2.0
    1. Open a Command Prompt window
    2. Go to the installation directory (say, C:\Program Files\AccelOps\Server)
    3. Run AOUpdateManager.exe with script.zip as the command line parameter. You will find script.zip alongside the MSI.
  6. Register Windows Agent Manager 2.0 to FortiSIEM.
 Windows Agent
  1. Uninstall V1.1 Agents.
  2. Install V2.0 Agents.
Upgrade from V2.0 to V2.1
Windows Agent Manager
  1. Uninstall Agent Manager 2.0
  2. Install Agent Manager 2.1
    1. In Database Settings dialog, set the old database path as AccelOpsCAC database path.
    2. Enter the same Administrator username and password (as in the previous installation) in the new Agent Manager Administrator account creation dialog.
  3. Run Database migration utility to convert from 2.0 to 2.1
    1. Open a Command Prompt window
    2. Go to the installation directory (say, C:\Program Files\AccelOps\Server)
    3. Run AOUpdateManager.exe with script.zip as the command line parameter. You will find script.zip alongside the MSI.
  4. Register Windows Agent Manager 2.1 to FortiSIEM.
 Windows Agent
  1. Uninstall V2.0 Agents.
  2. Install V2.1 Agents.
Upgrading Windows Agent License

Follow these steps if you have bought additional Windows Agent licenses or extended the term of the license.

  1. Log in to the AccelOps Supervisor using an admin account.
  2. Go to Admin > License Management and make sure that the license is updated.
  3. Go to Admin > Setup Wizard > Windows Agent.
  4. Edit each Windows Agent Manager entry and modify the agent count and license expiry date if needed.

The new license will be automatically pushed to each Windows Agent Manager. You can now log on to each Windows Agent Manager and allocate the additional licenses if needed.

Uninstalling Agents
Single Agent

Simply uninstall it like a regular Windows service.

Multiple Agents using Group Policy

Go to the Group Policy you created during Agent installation. Right-click it and select Edit.

In the Group Policy Management Editor, go to MyGPO > Computer Configuration > Policies > Software Settings > Software Installation.

Right click on FortiSIEM Windows Agent <version>

Click All Tasks > Remove

In Remove Software dialog, choose the option Immediately uninstall the software from users and computers. Then click OK.

The FortiSIEM Windows Agent <version> entry will disappear from the right pane. Close the Group Policy Management Editor. Force the group policy update:

On the Domain Controller, open cmd and run gpupdate /force

On each Agent server, open cmd and run gpupdate

Restart each Agent computer to complete the uninstall.

Automatic OS Upgrades during Reboot

In order to patch CentOS and system packages for security updates as well as bug fixes, and to make the system on par with a freshly installed FortiSIEM node, the following script is made available. Internet connectivity to CentOS mirrors must be working for the script to succeed; otherwise the script will print an error and exit. This script is available on all nodes starting from 4.6.3: Supervisor, Workers, Collectors, and Report Server.

/opt/phoenix/phscripts/bin/phUpdateSystem.sh

The above script is also invoked during system boot-up, from the following script:

/etc/init.d/phProvision.sh

This ensures that the node is up to date right after an upgrade and system reboot. If you are running a node that was first installed in an older release and upgraded to 4.6.3, then there are many OS/system packages that will be downloaded and installed the first time. Therefore, the upgrade time is longer than usual. On subsequent upgrades and reboots, the updates will be small.

Nodes that are deployed in bandwidth-constrained environments can disable this by commenting out the line that invokes phUpdateSystem.sh in phProvision.sh above. However, it is strongly recommended to keep this in place to ensure that your node has security fixes from CentOS and to minimize the risk of an exploit. Alternatively, in bandwidth-constrained environments, you can deploy a freshly installed Collector to ensure that security fixes are up to date.
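For a bandwidth-constrained node, disabling the automatic update amounts to commenting out the call in the provisioning script, for example:

vi /etc/init.d/phProvision.sh
# locate the line that invokes phUpdateSystem.sh and comment it out, e.g.:
# /opt/phoenix/phscripts/bin/phUpdateSystem.sh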
