
FortiSIEM Installing in VMware ESX

Installing in VMware ESX

Setting the Network Time Protocol (NTP) for ESX

Installing a Supervisor, Worker, or Collector Node in ESX

Importing the Supervisor, Collector, or Worker Image into the ESX Server

Editing the Supervisor, Collector, or Worker Hardware Settings

Setting Local Storage for the Supervisor

Troubleshooting Tips for Supervisor Installations

Configuring the Supervisor, Worker, or Collector from the VM Console

Setting the Network Time Protocol (NTP) for ESX

It’s important that your Virtual Appliance keeps accurate time so that FortiSIEM can correlate events from multiple devices in your environment.

  1. Log in to your VMware ESX server.
  2. Select your ESX host server.
  3. Click the Configuration tab.
  4. Under Software, select Time Configuration.
  5. Click Properties.
  6. Select NTP Client Enabled.
  7. Click Options.
  8. Under General, select Start automatically.
  9. Under NTP Settings, click Add.
  10. Enter the IP address of the NTP servers to use.

 

  11. Click Restart NTP service.
  12. Click OK to apply the changes.
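
Once NTP is enabled on the ESX host, it is worth confirming that the FortiSIEM virtual appliance itself is keeping time. A minimal check from the appliance console or an SSH session, assuming the appliance’s ntpd is pointed at the same NTP servers:

# Show the NTP peers ntpd is tracking; a peer marked with * is the
# current sync source, and offsets should be close to zero.
ntpq -p

# Sanity-check the wall-clock time and timezone.
date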
Installing a Supervisor, Worker, or Collector Node in ESX

The basic process for installing a FortiSIEM Supervisor, Worker, or Collector node is the same. Because Worker nodes are used only in deployments with NFS storage, first configure your Supervisor node to use NFS storage, and then configure your Worker node with the Supervisor’s NFS mount point as its mount point. See Configuring NFS Storage for VMware ESX Server for more information. Collector nodes are used only in multi-tenant deployments and must be registered with a running Supervisor node.
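
Before pointing the Supervisor or a Worker at NFS storage, it can save troubleshooting time to confirm the export is visible from the appliance. A quick sanity check, where the 192.168.1.50 address is a placeholder for your own NFS server:

# List the directories exported by the NFS server.
showmount -e 192.168.1.50

# Confirm the NFS service is registered with the portmapper.
rpcinfo -p 192.168.1.50 | grep nfs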

Importing the Supervisor, Collector, or Worker Image into the ESX Server

Editing the Supervisor, Collector, or Worker Hardware Settings

Setting Local Storage for the Supervisor

Troubleshooting Tips for Supervisor Installations

When you’re finished with the hypervisor-specific setup process, complete your installation by following the steps described under General Installation.


Importing the Supervisor, Collector, or Worker Image into the ESX Server

  1. Download and uncompress the FortiSIEM OVA package from the FortiSIEM image server to the location where you want to install the image.
  2. Log in to the VMware vSphere Client.
  3. In the File menu, select Deploy OVF Template.
  4. Browse to the .ova file (example: FortiSIEM-VA-4.3.1.1145.ova) and select it.

On the OVF Details page you will see the product and file size information.

  5. Click Next.
  6. Click Accept to accept the “End User Licensing Agreement,” and then click Next.
  7. Enter a Name for the Supervisor or Worker, and then click Next.
  8. Select a Storage location for the installed file, and then click Next.
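
If you prefer to script the import rather than click through the vSphere Client wizard, VMware’s ovftool can deploy the same OVA from the command line. A minimal sketch, where the host name, datastore, and VM name are placeholders for your environment:

# Deploy the FortiSIEM OVA to an ESX host non-interactively.
ovftool --acceptAllEulas \
        --name=FortiSIEM-Supervisor \
        --datastore=datastore1 \
        FortiSIEM-VA-4.3.1.1145.ova \
        'vi://root@esx-host.example.com/'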

 

Running on VMware ESXi 6.0

If you are importing FortiSIEM VA, Collector, or Report Server images for VMware on an ESXi 6.0 host, you will also need to upgrade the VM compatibility to ESXi 6.0. If the VM is already started, shut it down, and then use the Actions menu to upgrade compatibility. Without this upgrade, a VMware incompatibility caused our Collector VM processes to restart, and the Collector could not register with the Supervisor. Similar problems are likely to occur on the Supervisor, Worker, or Report Server, so make sure their VM compatibility is upgraded as well. More information about VM compatibility is available in the VMware KB article below:

https://kb.vmware.com/kb/1010675

Editing the Supervisor, Collector, or Worker Hardware Settings

Before you start the Supervisor, Worker, or Collector for the first time you need to make some changes to its hardware settings.

  1. In the VMware vSphere client, select the imported Supervisor, Worker, or Collector.
  2. Right-click on the node to open the Virtual Appliance Options menu, and then select Edit Settings.
  3. Select the Hardware tab, and check that Memory is set to at least 16 GB and CPUs is set to 8.
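
Once the appliance is eventually powered on, you can confirm from inside the guest that it actually sees the allocated resources; a quick check over SSH or the console:

# Confirm the guest sees 8 vCPUs and roughly 16 GB of RAM.
nproc                          # expect: 8
grep MemTotal /proc/meminfo    # expect on the order of 16,000,000 kB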

Setting Local Storage for the Supervisor

Using NFS Storage

You can install the Supervisor using either native ESX storage or NFS storage. These instructions are for creating native ESX storage. See Configuring NFS Storage for VMware ESX Server for more information. If you are using NFS storage, you will set the IP address of the NFS server when you set the storage mount point in Configuring the Supervisor, Worker, or Collector from the VM Console.

  1. On the Hardware tab, click Add.
  2. In the Add Hardware dialog, select Hard Disk, and then click Next.
  3. Select Create a new virtual disk, and then click Next.
  4. Check that these selections are made in the Create a Disk dialog:

Disk Size: 300 GB
Disk Provisioning: Thick Provision Lazy Zeroed
Location: Store with the Virtual Machine

See the Hardware Requirements for Supervisor and Worker Nodes in the Browser Support and Hardware Requirements topic for more specific disk size recommendations based on Overall EPS.
  5. In the Advanced Options dialog, make sure that the Independent option for Mode is not selected.
  6. Check all the options for creating the virtual disk, and then click Finish.
  7. In the Virtual Machine Properties dialog, click OK. The Reconfigure virtual machine task will launch.
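
After the reconfigure task completes and the appliance boots, you can verify that the guest sees the new disk before assigning it as local storage; a minimal check, assuming the disk enumerates as /dev/sdd as in the console configuration steps later in this document:

# Confirm the new 300 GB virtual disk is visible to the guest.
fdisk -l /dev/sdd        # should report a disk of roughly 300 GB
cat /proc/partitions     # sdd should appear in the kernel's partition list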

Troubleshooting Tips for Supervisor Installations

Check the Supervisor System and Directory Level Permissions
Check Backend System Health

Check the Supervisor System and Directory Level Permissions

Use SSH to connect to the Supervisor and check that the cmdb, data, query, querywkr, and svn permissions match those in this listing:

 

[root@super ~]# ls -l /
dr-xr-xr-x.   2 root     root      4096 Oct 15 11:09 bin
dr-xr-xr-x.   5 root     root      1024 Oct 15 14:50 boot
drwxr-xr-x    4 postgres postgres  4096 Nov 10 18:59 cmdb
drwxr-xr-x    9 admin    admin     4096 Nov 11 11:32 data
drwxr-xr-x   15 root     root      3560 Nov 10 11:11 dev
-rw-r--r--    1 root     root        34 Nov 11 12:09 dump.rdb
drwxr-xr-x.  93 root     root     12288 Nov 11 12:12 etc
drwxr-xr-x.   4 root     root      4096 Nov 10 11:08 home
dr-xr-xr-x.  11 root     root      4096 Oct 15 11:13 lib
dr-xr-xr-x.   9 root     root     12288 Nov 10 19:13 lib64
drwx------.   2 root     root     16384 Oct 15 14:46 lost+found
drwxr-xr-x.   2 root     root      4096 Sep 23  2011 media
drwxr-xr-x.   2 root     root      4096 Sep 23  2011 mnt
drwxr-xr-x.  10 root     root      4096 Nov 10 09:37 opt
drwxr-xr-x    2 root     root      4096 Nov 10 11:10 pbin
dr-xr-xr-x  289 root     root         0 Nov 10 11:13 proc
drwxr-xr-x    8 admin    admin     4096 Nov 11 00:37 query
drwxr-xr-x    8 admin    admin     4096 Nov 10 18:58 querywkr
dr-xr-x---.   7 root     root      4096 Nov 10 19:13 root
dr-xr-xr-x.   2 root     root     12288 Oct 15 11:08 sbin
drwxr-xr-x.   2 root     root      4096 Oct 15 14:47 selinux
drwxr-xr-x.   2 root     root      4096 Sep 23  2011 srv
drwxr-xr-x    4 apache   apache    4096 Nov 10 18:58 svn
drwxr-xr-x   13 root     root         0 Nov 10 11:13 sys
drwxrwxrwt.   9 root     root      4096 Nov 11 12:12 tmp
drwxr-xr-x.  15 root     root      4096 Oct 15 14:58 usr
drwxr-xr-x.  21 root     root      4096 Oct 15 11:01 var

 

Check that the /data, /cmdb, and /svn directory level permissions match those in these listings:

 

[root@super ~]# ls -l /data
drwxr-xr-x 3 root     root     4096 Nov 11 02:52 archive
drwxr-xr-x 3 admin    admin    4096 Nov 11 12:01 cache
drwxr-xr-x 2 postgres postgres 4096 Nov 10 18:46 cmdb
drwxr-xr-x 2 admin    admin    4096 Nov 10 19:04 custParser
drwxr-xr-x 5 admin    admin    4096 Nov 11 00:29 eventdb
drwxr-xr-x 2 admin    admin    4096 Nov 10 19:04 jmxXml
drwxr-xr-x 2 admin    admin    4096 Nov 11 11:33 mibXml

[root@super ~]# ls -l /cmdb
drwx------ 14 postgres postgres 4096 Nov 10 11:08 data

[root@super ~]# ls -l /svn
drwxr-xr-x 6 apache apache 4096 Nov 10 18:58 repos
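
If you prefer to script this check rather than compare listings by eye, a small loop over the key directories prints the owner, group, and octal mode for each; expected values are taken from the listings above:

# Expected (per the listings above): /cmdb -> postgres postgres 755,
# /data, /query, /querywkr -> admin admin 755, /svn -> apache apache 755.
for d in /cmdb /data /query /querywkr /svn; do
    stat -c '%n %U %G %a' "$d"
done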

 

Check Backend System Health

Use SSH to connect to the Supervisor and run phstatus to see if the system status metrics match those in this output:

 

 

[root@super ~]# phstatus

Every 1.0s: /opt/phoenix/bin/phstatus.py

System uptime:  12:37:58 up 17:24,  1 user,  load average: 0.06, 0.01, 0.00

Tasks: 20 total, 0 running, 20 sleeping, 0 stopped, 0 zombie

Cpu(s): 8 cores, 0.6%us, 0.7%sy, 0.0%ni, 98.6%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st

Mem: 16333720k total, 5466488k used, 10867232k free, 139660k buffers

Swap: 6291448k total, 0k used, 6291448k free, 1528488k cached

PROCESS                  UPTIME         CPU%           VIRT_MEM       RES_MEM
phParser                 12:00:34    0              1788m          280m
phQueryMaster            12:00:34    0              944m           63m
phRuleMaster             12:00:34    0              596m           85m
phRuleWorker             12:00:34    0              1256m          252m
phQueryWorker            12:00:34    0              1273m          246m
phDataManager            12:00:34    0              1505m          303m
phDiscover               12:00:34    0              383m           32m
phReportWorker           12:00:34    0              1322m          88m
phReportMaster           12:00:34    0              435m           38m
phIpIdentityWorker       12:00:34    0              907m           47m
phIpIdentityMaster       12:00:34    0              373m           26m
phAgentManager           12:00:34    0              881m           200m
phCheckpoint             12:00:34    0              98m            23m
phPerfMonitor            12:00:34    0              700m           40m
phReportLoader           12:00:34    0              630m           233m
phMonitor                31:21       0              1120m          25m
Apache                   17:23:23    0              260m           11m

Node.js                  17:20:54    0              656m           35m

AppSvr                   17:23:16    0              8183m          1344m

DBSvr                    17:23:34    0              448m           17m
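
phstatus refreshes continuously (note the watch-style header above). To take a single snapshot, or to spot-check one process, you can invoke the underlying script shown in that header directly; a minimal sketch, assuming /opt/phoenix/bin/phstatus.py prints the same table once when run on its own:

# One-shot status table instead of the refreshing view.
/opt/phoenix/bin/phstatus.py

# Spot-check a single backend process, for example the parser.
/opt/phoenix/bin/phstatus.py | grep phParser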

 

 

Configuring the Supervisor, Worker, or Collector from the VM Console
  1. In the VMware vSphere client, select the Supervisor, Worker, or Collector virtual appliance.
  2. Right-click to open the Virtual Appliance Options menu, and then select Power > Power On.
  3. In the Virtual Appliance Options menu, select Open Console.
  4. In the VM console, select Set Timezone and then press Enter.
  5. Select your Location, and then press Enter.
  6. Select your Country, and then press Enter.
  7. Select your Timezone, and then press Enter.
  8. Review your Timezone information, select 1, and then press Enter.
  9. When the Configuration screen reloads, select Login, and then press Enter.
  10. Enter the default login credentials.
Login: root
Password: ProspectHills
  11. Run the vami_config_net script to configure the network.

 

  12. When prompted, enter the information for these network components to configure the static IP address: IP Address, Netmask, Gateway, DNS Server(s).
  13. Enter the Host name, and then press Enter.
  14. For the Supervisor, set either the Local or NFS storage mount point, as shown below.

For a Worker, use the same NFS server IP address that you set for the Supervisor.

Supervisor Local storage: /dev/sdd
Supervisor NFS storage: <NFS_Server_IP_Address>:/<Directory_Path>
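
For example, with a hypothetical NFS server at 192.168.1.50 exporting /fsm-data, the NFS mount point would be entered as:

192.168.1.50:/fsm-data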

 

After you set the mount point, the Supervisor automatically reboots, and configuration completes in 15 to 25 minutes.
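
When the appliance comes back up, you can confirm the storage landed where FortiSIEM expects it; a minimal check, assuming /data is the event storage mount point (as in the /data listing earlier):

# Confirm the event storage is mounted and sized as expected.
df -h /data
mount | grep /data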

ISO Installation

These topics cover installation of FortiSIEM from an ISO under a native file system such as Linux, also known as installing “on bare metal.”

Installing a Collector on Bare Metal Hardware

