
FIM-7901E interface module


The FIM-7901E interface module is a hot-swappable module that provides data, management, and session sync/heartbeat interfaces, as well as base backplane switching and fabric backplane session-aware load balancing for a FortiGate-7000 chassis. The FIM-7901E includes an integrated switch fabric and DP2 processors that load balance millions of data sessions over the chassis fabric backplane to FPM processor modules.

The FIM-7901E can be installed in any FortiGate-7000 series chassis in hub/switch slots 1 and 2. The FIM-7901E provides thirty-two 10GigE small form-factor pluggable plus (SFP+) interfaces for a FortiGate-7000 chassis.

You can also install FIM-7901Es in a second chassis and operate the chassis in HA mode with another set of processor modules to provide chassis failover protection.

FIM-7901E front panel

The FIM-7901E includes the following hardware features:

  • Thirty-two front panel 10GigE SFP+ fabric channel interfaces (A1 to A32). These interfaces are connected to 10Gbps networks to distribute sessions to the FPM processor modules installed in chassis slots 3 and up. These interfaces can also be configured to operate as Gigabit Ethernet interfaces using SFP transceivers. These interfaces also support creating link aggregation groups (LAGs) that can include interfaces from both FIM-7901Es.
  • Two front panel 10GigE SFP+ interfaces (M1 and M2) that connect to the base backplane channel. These interfaces are used for heartbeat, session sync, and management communication between FIM-7901Es in different chassis. These interfaces can also be configured to operate as Gigabit Ethernet interfaces using SFP transceivers, but should not normally be changed. If you use switches to connect these interfaces, the switch ports should be able to accept packets with a maximum frame size of at least 1526. The M1 and M2 interfaces need to be on different broadcast domains. If M1 and M2 are connected to the same switch, Q-in-Q must be enabled on the switch.
  • Four 10/100/1000BASE-T out of band management Ethernet interfaces (MGMT1 to MGMT4).
  • One 80Gbps fabric backplane channel for traffic distribution with each FPM module installed in the same chassis as the FIM-7901E.
  • One 1Gbps base backplane channel for base backplane communication with each FPM module installed in the same chassis as the FIM-7901E.
  • One 40Gbps fabric backplane channel for fabric backplane communication with the other FIM-7901E in the chassis.


  • One 1Gbps base backplane channel for base backplane communication with the other FIM-7901E in the chassis.
  • On-board DP2 processors and an integrated switch fabric to provide high-capacity session-aware load balancing.
  • One front panel USB port.
  • Power button.
  • NMI switch (for troubleshooting as recommended by Fortinet Support).
  • Mounting hardware.
  • LED status indicators.

FIM-7901E schematic

The FIM-7901E includes an integrated switch fabric (ISF) that connects the front panel interfaces to the DP2 session-aware load balancers and to the chassis backplanes. The ISF also allows the DP2 processors to distribute sessions among all NP6 processors on the FPM modules in the same chassis.

FIM-7901E schematic


FIM-7904E interface module


The FIM-7904E interface module is a hot-swappable module that provides data, management, and session sync/heartbeat interfaces, as well as base backplane switching and fabric backplane session-aware load balancing for a FortiGate-7000 series chassis. The FIM-7904E includes an integrated switch fabric and DP2 processors that load balance millions of data sessions over the chassis fabric backplane to FPM processor modules.

The FIM-7904E can be installed in any FortiGate-7000 series chassis in hub/switch slots 1 and 2. The FIM-7904E provides eight Quad Small Form-factor Pluggable plus (QSFP+) interfaces for a FortiGate-7000 chassis. Using a 40GBASE-SR4 multimode QSFP+ transceiver, each QSFP+ interface can also be split into four 10GBASE-SR interfaces.

You can also install FIM-7904Es in a second chassis and operate the chassis in HA mode with another set of processor modules to provide chassis failover protection.

FIM-7904E front panel

The FIM-7904E includes the following hardware features:

  • Eight front panel 40GigE QSFP+ fabric channel interfaces (B1 to B8). These interfaces are connected to 40Gbps networks to distribute sessions to the FPM processor modules installed in chassis slots 3 and up. Using 40GBASE-SR4 multimode QSFP+ transceivers, each QSFP+ interface can also be split into four 10GBASE-SR interfaces. These interfaces also support creating link aggregation groups (LAGs) that can include interfaces from both FIM-7904Es.
  • Two front panel 10GigE SFP+ interfaces (M1 and M2) that connect to the base backplane channel. These interfaces are used for heartbeat, session sync, and management communication between FIM-7904Es in different chassis. These interfaces can also be configured to operate as Gigabit Ethernet interfaces using SFP transceivers, but should not normally be changed. If you use switches to connect these interfaces, the switch ports should be able to accept packets with a maximum frame size of at least 1526. The M1 and M2 interfaces need to be on different broadcast domains. If M1 and M2 are connected to the same switch, Q-in-Q must be enabled on the switch.
  • Four 10/100/1000BASE-T out of band management Ethernet interfaces (MGMT1 to MGMT4).
  • One 80Gbps fabric backplane channel for traffic distribution with each FPM module installed in the same chassis as the FIM-7904E.

 

  • One 1Gbps base backplane channel for base backplane communication with each FPM module installed in the same chassis as the FIM-7904E.
  • One 40Gbps fabric backplane channel for fabric backplane communication with the other FIM-7904E in the chassis.
  • One 1Gbps base backplane channel for base backplane communication with the other FIM-7904E in the chassis.
  • On-board DP2 processors and an integrated switch fabric to provide high-capacity session-aware load balancing.
  • One front panel USB port.
  • Power button.
  • NMI switch (for troubleshooting as recommended by Fortinet Support).
  • Mounting hardware.
  • LED status indicators.

Splitting the FIM-7904E B1 to B8 interfaces

Each 40GE interface (B1 to B8) on the FIM-7904Es in slot 1 and slot 2 of a FortiGate-7000 system can be split into four 10GbE interfaces. You split these interfaces after the FIM-7904Es are installed in your FortiGate-7000 system and the system is up and running. You can split the interfaces of the FIM-7904Es in slot 1 and slot 2 at the same time by entering a single CLI command. Splitting the interfaces requires a system reboot, so Fortinet recommends that you split all of the interfaces you require in one operation to avoid repeated traffic disruptions.

For example, to split the B1 interface of the FIM-7904E in slot 1 (this interface is named 1-B1) and the B1 and B4 interfaces of the FIM-7904E in slot 2 (these interfaces are named 2-B1 and 2-B4), connect to the CLI of your FortiGate-7000 system using the management IP and enter the following command:

config system global
  set split-port 1-B1 2-B1 2-B4
end

After you enter the command, the FortiGate-7000 reboots and when it comes up:

  • The 1-B1 interface will no longer be available. Instead, the 1-B1/1, 1-B1/2, 1-B1/3, and 1-B1/4 interfaces will be available.
  • The 2-B1 interface will no longer be available. Instead, the 2-B1/1, 2-B1/2, 2-B1/3, and 2-B1/4 interfaces will be available.
  • The 2-B4 interface will no longer be available. Instead, the 2-B4/1, 2-B4/2, 2-B4/3, and 2-B4/4 interfaces will be available.

You can now connect breakout cables to these interfaces and configure traffic between them just like any other FortiGate interface.

FIM-7904E hardware schematic

The FIM-7904E includes an integrated switch fabric (ISF) that connects the front panel interfaces to the DP2 session-aware load balancers and to the chassis backplanes. The ISF also allows the DP2 processors to distribute sessions among all NP6 processors on the FPM modules in the same chassis.


FIM-7904E hardware architecture

FIM-7910E interface module

The FIM-7910E interface module is a hot-swappable module that provides data, management, and session sync/heartbeat interfaces, as well as base backplane switching and fabric backplane session-aware load balancing for a FortiGate-7000 series chassis. The FIM-7910E includes an integrated switch fabric and DP2 processors that load balance millions of data sessions over the chassis fabric backplane to FPM processor modules.

The FIM-7910E can be installed in any FortiGate-7000 series chassis in hub/switch slots 1 and 2. The FIM-7910E provides four C form-factor pluggable 2 (CFP2) interfaces for a FortiGate-7000 chassis. Using a 100GBASE-SR10 multimode CFP2 transceiver, each CFP2 interface can also be split into ten 10GBASE-SR interfaces.

FIM-7910E front panel


The FIM-7910E includes the following hardware features:

  • Four front panel 100GigE CFP2 fabric channel interfaces (C1 to C4). These interfaces are connected to 100Gbps networks to distribute sessions to the FPM processor modules installed in chassis slots 3 and up. Using 100GBASE-SR10 multimode CFP2 transceivers, each CFP2 interface can also be split into ten 10GBASE-SR interfaces. These interfaces also support creating link aggregation groups (LAGs) that can include interfaces from both FIM-7910Es.
  • Two front panel 10GigE SFP+ interfaces (M1 and M2) that connect to the base backplane channel. These interfaces are used for heartbeat, session sync, and management communication between FIM-7910Es in different chassis. These interfaces can also be configured to operate as Gigabit Ethernet interfaces using SFP transceivers, but should not normally be changed. If you use switches to connect these interfaces, the switch ports should be able to accept packets with a maximum frame size of at least 1526. The M1 and M2 interfaces need to be on different broadcast domains. If M1 and M2 are connected to the same switch, Q-in-Q must be enabled on the switch.
  • Four 10/100/1000BASE-T out of band management Ethernet interfaces (MGMT1 to MGMT4).
  • One 80Gbps fabric backplane channel for traffic distribution with each FPM module installed in the same chassis as the FIM-7910E.
  • One 1Gbps base backplane channel for base backplane communication with each FPM module installed in the same chassis as the FIM-7910E.
  • One 40Gbps fabric backplane channel for fabric backplane communication with the other FIM-7910E in the chassis.
  • One 1Gbps base backplane channel for base backplane communication with the other FIM-7910E in the chassis.
  • On-board DP2 processors and an integrated switch fabric to provide high-capacity session-aware load balancing.
  • One front panel USB port.
  • Power button.
  • NMI switch (for troubleshooting as recommended by Fortinet Support).
  • Mounting hardware.
  • LED status indicators.

Splitting the FIM-7910E C1 to C4 interfaces

Each 100GE interface (C1 to C4) on the FIM-7910Es in slot 1 and slot 2 of a FortiGate-7000 system can be split into ten 10GbE interfaces. You split these interfaces after the FIM-7910Es are installed in your FortiGate-7000 system and the system is up and running. You can split the interfaces of the FIM-7910Es in slot 1 and slot 2 at the same time by entering a single CLI command. Splitting the interfaces requires a system reboot, so Fortinet recommends that you split all of the interfaces you require in one operation to avoid repeated traffic disruptions.

For example, to split the C1 interface of the FIM-7910E in slot 1 (this interface is named 1-C1) and the C1 and C4 interfaces of the FIM-7910E in slot 2 (these interfaces are named 2-C1 and 2-C4), connect to the CLI of your FortiGate-7000 system using the management IP and enter the following command:

config system global
  set split-port 1-C1 2-C1 2-C4
end

After you enter the command, the FortiGate-7000 reboots and when it comes up:

  • The 1-C1 interface will no longer be available. Instead, the 1-C1/1, 1-C1/2, …, and 1-C1/10 interfaces will be available.
  • The 2-C1 interface will no longer be available. Instead, the 2-C1/1, 2-C1/2, …, and 2-C1/10 interfaces will be available.


  • The 2-C4 interface will no longer be available. Instead, the 2-C4/1, 2-C4/2, …, and 2-C4/10 interfaces will be available.

You can now connect breakout cables to these interfaces and configure traffic between them just like any other FortiGate interface.

FIM-7910E hardware schematic

The FIM-7910E includes an integrated switch fabric (ISF) that connects the front panel interfaces to the DP2 session-aware load balancers and to the chassis backplanes. The ISF also allows the DP2 processors to distribute sessions among all NP6 processors on the FPM modules in the same chassis.

FIM-7910E hardware schematic

FIM-7920E interface module

The FIM-7920E interface module is a hot-swappable module that provides data, management, and session sync/heartbeat interfaces, as well as base backplane switching and fabric backplane session-aware load balancing for a FortiGate-7000 series chassis. The FIM-7920E includes an integrated switch fabric and DP2 processors that load balance millions of data sessions over the chassis fabric backplane to FPM processor modules.

The FIM-7920E can be installed in any FortiGate-7000 series chassis in hub/switch slots 1 or 2. The FIM-7920E provides four Quad Small Form-factor Pluggable 28 (QSFP28) 100GigE interfaces for a FortiGate-7000 chassis. Using a 100GBASE-SR4 QSFP28 or 40GBASE-SR4 QSFP+ transceiver, each QSFP28 interface can also be split into four 10GBASE-SR interfaces.

You can also install FIM-7920Es in a second chassis and operate the chassis in HA mode with another set of processor modules to provide chassis failover protection.


FIM-7920E front panel

The FIM-7920E includes the following hardware features:

  • Four front panel 100GigE QSFP28 fabric channel interfaces (C1 to C4). These interfaces are connected to 100Gbps networks to distribute sessions to the FPM processor modules installed in chassis slots 3 and up. Using a 100GBASE-SR4 QSFP28 or 40GBASE-SR4 QSFP+ transceiver, each QSFP28 interface can also be split into four 10GBASE-SR interfaces. These interfaces also support creating link aggregation groups (LAGs) that can include interfaces from both FIM-7920Es.

  • Two front panel 10GigE SFP+ interfaces (M1 and M2) that connect to the base backplane channel. These interfaces are used for heartbeat, session sync, and management communication between FIM-7920Es in different chassis. These interfaces can also be configured to operate as Gigabit Ethernet interfaces using SFP transceivers, but should not normally be changed. If you use switches to connect these interfaces, the switch ports should be able to accept packets with a maximum frame size of at least 1526. The M1 and M2 interfaces need to be on different broadcast domains. If M1 and M2 are connected to the same switch, Q-in-Q must be enabled on the switch.
  • Four 10/100/1000BASE-T out of band management Ethernet interfaces (MGMT1 to MGMT4).
  • One 80Gbps fabric backplane channel for traffic distribution with each FPM module installed in the same chassis as the FIM-7920E.
  • One 1Gbps base backplane channel for base backplane communication with each FPM module installed in the same chassis as the FIM-7920E.
  • One 40Gbps fabric backplane channel for fabric backplane communication with the other FIM-7920E in the chassis.
  • One 1Gbps base backplane channel for base backplane communication with the other FIM-7920E in the chassis.
  • On-board DP2 processors and an integrated switch fabric to provide high-capacity session-aware load balancing.
  • One front panel USB port.
  • Power button.
  • NMI switch (for troubleshooting as recommended by Fortinet Support).
  • Mounting hardware.
  • LED status indicators.

Changing the interface type and splitting the FIM-7920E C1 to C4 interfaces

By default, the FIM-7920E C1 to C4 interfaces are configured as 100GE QSFP28 interfaces. You can use the following command to convert them to 40GE QSFP+ interfaces. Once converted, you can use the split-port command described below to split each of them into four 10GBASE-SR interfaces.

 


Changing the interface type

For example, to change the interface type of the C1 interface of the FIM-7920E in slot 1 to 40GE QSFP+, connect to the CLI of your FortiGate-7000 system using the management IP and enter the following command:

config system global
  set qsfp28-40g-port 1-C1
end

The FortiGate-7000 system reboots. When it starts up, interface C1 of the FIM-7920E in slot 1 operates as a 40GE QSFP+ interface.

To change the interface type of the C3 and C4 ports of the FIM-7920E in slot 2 to 40GE QSFP+, enter the following command:

config system global
  set qsfp28-40g-port 2-C3 2-C4
end

The FortiGate-7000 system reboots. When it starts up, interfaces C3 and C4 of the FIM-7920E in slot 2 operate as 40GE QSFP+ interfaces.

Splitting the C1 to C4 interfaces

Each 40GE interface (C1 to C4) on the FIM-7920Es in slot 1 and slot 2 of a FortiGate-7000 system can be split into four 10GbE interfaces. You split these interfaces after the FIM-7920Es are installed in your FortiGate-7000 system and the system is up and running. You can split the interfaces of the FIM-7920Es in slot 1 and slot 2 at the same time by entering a single CLI command. Splitting the interfaces requires a system reboot, so Fortinet recommends that you split all of the interfaces you require in one operation to avoid repeated traffic disruptions.

For example, to split the C1 interface of the FIM-7920E in slot 1 (this interface is named 1-C1) and the C1 and C4 interfaces of the FIM-7920E in slot 2 (these interfaces are named 2-C1 and 2-C4), connect to the CLI of your FortiGate-7000 system using the management IP and enter the following command:

config system global
  set split-port 1-C1 2-C1 2-C4
end

After you enter the command, the FortiGate-7000 reboots and when it comes up:

  • The 1-C1 interface will no longer be available. Instead, the 1-C1/1, 1-C1/2, 1-C1/3, and 1-C1/4 interfaces will be available.
  • The 2-C1 interface will no longer be available. Instead, the 2-C1/1, 2-C1/2, 2-C1/3, and 2-C1/4 interfaces will be available.
  • The 2-C4 interface will no longer be available. Instead, the 2-C4/1, 2-C4/2, 2-C4/3, and 2-C4/4 interfaces will be available.

You can now connect breakout cables to these interfaces and configure traffic between them just like any other FortiGate interface.

FIM-7920E hardware schematic

The FIM-7920E includes an integrated switch fabric (ISF) that connects the front panel interfaces to the DP2 session-aware load balancers and to the chassis backplanes. The ISF also allows the DP2 processors to distribute sessions among all NP6 processors on the FPM modules in the same chassis.

FIM-7920E hardware schematic

FPM-7620E processing module

The FPM-7620E processing module is a high-performance worker module that processes sessions load balanced to it by FortiGate-7000 series interface (FIM) modules over the chassis fabric backplane. The FPM-7620E can be installed in any FortiGate-7000 series chassis in slots 3 and up.

The FPM-7620E includes two 80Gbps connections to the chassis fabric backplane and two 1Gbps connections to the base backplane. The FPM-7620E processes sessions using a dual CPU configuration, accelerates network traffic processing with four NP6 processors, and accelerates content processing with eight CP9 processors. The NP6 network processors are connected by the FIM switch fabric, so all supported traffic types can be fast-path accelerated by the NP6 processors.

The FPM-7620E includes the following hardware features:

  • Two 80Gbps fabric backplane channels for load balanced sessions from the FIM modules installed in the chassis.
  • Two 1Gbps base backplane channels for management, heartbeat, and session sync communication.
  • Dual CPUs for high performance operation.
  • Four NP6 processors to offload network processing from the CPUs.
  • Eight CP9 processors to offload content processing and SSL and IPsec encryption from the CPUs.


FPM-7620E front panel

  • Power button.
  • NMI switch (for troubleshooting as recommended by Fortinet Support).
  • Mounting hardware.
  • LED status indicators.

NP6 network processors – offloading load balancing and network traffic

The four FPM-7620E NP6 network processors combined with the FIM module integrated switch fabric (ISF) provide hardware acceleration by offloading load balancing from the FPM-7620E CPUs. The result is enhanced network performance from the NP6 processors, plus the network processing load is removed from the CPUs. The NP6 processors can also handle some CPU-intensive tasks, like IPsec VPN encryption/decryption. Because of the integrated switch fabric, all sessions are fast-pathed and accelerated.

FPM-7620E hardware architecture

 


Accelerated IPS, SSL VPN, and IPsec VPN (CP9 content processors)

The FPM-7620E includes eight CP9 processors that provide the following performance enhancements:

  • Flow-based inspection (IPS, application control, etc.) pattern matching acceleration with over 10Gbps throughput.
  • IPS pre-scan.
  • IPS signature correlation.
  • Full match processors.
  • High performance VPN bulk data engine.
  • IPsec and SSL/TLS protocol processor.
  • DES/3DES/AES128/192/256 in accordance with FIPS46-3/FIPS81/FIPS197.
  • MD5/SHA-1/SHA256/384/512-96/128/192/256 with RFC1321 and FIPS180.
  • HMAC in accordance with RFC2104/2403/2404 and FIPS198.
  • ESN mode.
  • GCM support for NSA "Suite B" (RFC6379/RFC6460) including GCM-128/256 and GMAC-128/256.
  • Key Exchange Processor that supports high performance IKE and RSA computation.
  • Public key exponentiation engine with hardware CRT support.
  • Primality checking for RSA key generation.
  • Handshake accelerator with automatic key material generation.
  • True Random Number generator.
  • Elliptic Curve support for NSA "Suite B".
  • Sub public key engine (PKCE) to support up to 4096-bit operation directly (4k for DH and 8k for RSA with CRT).
  • DLP fingerprint support.
  • TTTD (Two-Thresholds-Two-Divisors) content chunking with configurable thresholds and divisors.

Getting started with FortiGate-7000


Once you have installed your FortiGate-7000 chassis in a rack and installed FIM interface modules and FPM processing modules in it you can power on the chassis and all modules in the chassis will power up.

Whenever a chassis is first powered on, it takes about 5 minutes for all modules to start up and become completely initialized and synchronized. During this time, the chassis will not allow traffic to pass through, and you may not be able to log into the GUI; if you do manage to log in, the session could time out as the FortiGate-7000 continues negotiating.

Review the chassis and module front panel LEDs to verify that everything is operating normally. Wait until the chassis has completely started up and synchronized before making configuration changes. You can use the diagnose system ha status command to confirm that the FortiGate-7000 is completely initialized. If the output from this command hasn't changed after checking for a few minutes, you can assume that the system has initialized. You don't normally have to confirm that the system has initialized, but this diagnose command is available if needed.

You can configure and manage the FortiGate-7000 by connecting an Ethernet cable to one of the MGMT1 to MGMT4 interfaces of one of the FIM interface modules in the chassis. By default the MGMT1 to MGMT4 interfaces of both interface modules have been added to a static 802.3 aggregate interface called mgmt with a default IP address of 192.168.1.99.

You can connect to any of the MGMT1 to MGMT4 interfaces to create a management connection to the FortiGate-7000. You can also set up a switch with a static 802.3 aggregate interface and connect the switch ports in the aggregate interface to multiple MGMT1 to MGMT4 interfaces to set up redundant management connections to the FortiGate-7000.

Connect to the GUI by browsing to https://192.168.1.99. Log into the GUI using the admin account with no password. Connect to the CLI by using SSH to connect to 192.168.1.99. You may have to enable SSH administrative access for the mgmt interface before you can connect to the CLI.
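For example, the following CLI session is a minimal sketch of enabling SSH (along with HTTPS and ping) on the mgmt interface. Note that allowaccess sets the complete list of allowed services, so include every protocol that should remain enabled; if your CLI session starts in a VDOM context, you may need to enter config global first.

config system interface
  edit mgmt
    set allowaccess ping https ssh
  next
end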

Once you have logged into the GUI or CLI you can view and change the configuration of your FortiGate-7000 just like any FortiGate. For example, all of the interfaces from both interface modules are visible and you can configure firewall policies between any two interfaces, even if they are physically in different interface modules. You can also configure aggregate interfaces that include physical interfaces from both interface modules.
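For example, a firewall policy between interfaces on different interface modules is configured just like any other FortiGate firewall policy. The following sketch assumes hypothetical interface names 1-A1 (on the FIM in slot 1) and 2-A1 (on the FIM in slot 2); substitute the interfaces you actually use:

config firewall policy
  edit 0
    set name "cross-module-example"
    set srcintf "1-A1"
    set dstintf "2-A1"
    set srcaddr "all"
    set dstaddr "all"
    set action accept
    set schedule "always"
    set service "ALL"
  next
end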

The following example Unit Operation dashboard widget shows a FortiGate-7040E with FIM-7901E modules in slots 1 and 2 and FPM modules in slots 3 and 4.


Example FortiGate-7040 unit operation widget view

Managing individual modules

When you log into the GUI or CLI using the mgmt interface IP address you are actually connected to the primary (or master) interface module in slot 1 (the address of slot 1 is FIM01). To verify which module you have logged into, the GUI header banner or CLI prompt shows the hostname of the module you are logged into plus the slot address in the format <hostname> (<slot address>).

In some cases you may want to connect to individual modules. For example, you may want to view the traffic being processed by a specific processor module. You can connect to the GUI or CLI of individual modules in the chassis using the system management IP address with a special port number.

For example, if the system management IP address is 192.168.1.99 you can connect to the GUI of the interface module in slot 1 using the system management IP address (for example, by browsing to https://192.168.1.99). You can also use the system management IP address followed by the special port number, for example https://192.168.1.99:44301.

The special port number (in this case 44301) is a combination of the service port (for HTTPS the service port is 443) and the chassis slot number (in this example, 01). The following table lists the special ports to use to connect to each chassis slot using common admin protocols:

FortiGate-7000 special administration port numbers

Slot Number   Slot Address                     HTTP (80)   HTTPS (443)   Telnet (23)   SSH (22)   SNMP (161)
1             Primary interface module FIM01   8001        44301         2301          2201       16101
2             Interface module FIM02           8002        44302         2302          2202       16102
3             Processor module FPM03           8003        44303         2303          2203       16103
4             Processor module FPM04           8004        44304         2304          2204       16104
5             Processor module FPM05           8005        44305         2305          2205       16105
6             Processor module FPM06           8006        44306         2306          2206       16106

For example:

  • To connect to the GUI of the processor module in slot 3 using HTTPS, browse to https://192.168.1.99:44303.
  • To send an SNMP query to the processor module in slot 6, use the port number 16106.
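For example, assuming the default management IP address and the default admin account, you could open an SSH session to the CLI of the processor module in slot 3 from a management PC like this:

ssh -p 2203 admin@192.168.1.99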

The FortiGate-7000 configuration is the same no matter which module you log into. Logging into different modules allows you to use FortiView or Monitor GUI pages to view the activity on that module. Even though you can log into different modules, you should only make configuration changes from the primary interface module, which is the FIM module in slot 1.

Managing individual modules from the CLI

From the CLI you can use the following command to switch between chassis slots and perform different operations on the modules in each slot:

execute load-balance slot {manage | power-off | power-on | reboot} <slot-number>

Use manage to connect to the CLI of a different module. Use power-off, power-on, and reboot to turn the power off or on, or to reboot, the module in <slot-number>.
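For example, the following commands (run from the primary FIM module CLI) would connect to the CLI of the FPM module in slot 4 and then reboot the FPM module in slot 5; the slot numbers here are arbitrary examples:

execute load-balance slot manage 4
execute load-balance slot reboot 5

When you are finished with a CLI session started with manage, log out with the exit command.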

Connecting to module CLIs using the management module

All FortiGate-7000 chassis include a front panel management module (also called a shelf manager). See the system guide for your chassis for details about the management module.


FortiGate-7040E management module front panel

The management module includes two console ports named Console 1 and Console 2 that can be used to connect to the CLI of the FIM and FPM modules in the chassis. As described in the system guide, the console ports are also used to connect to the SMC CLIs of the management module and the FIM and FPM modules.

By default, when the chassis first starts up, Console 1 is connected to the FortiOS CLI of the FIM module in slot 1 and Console 2 is disconnected. The default settings for connecting to each console port are:

Baud Rate (bps) 9600, Data bits 8, Parity None, Stop bits 1, and Flow Control None.

You can use the console connection change buttons to select the CLI that each console port is connected to. Press the button to cycle through the FIM and FPM module FortiOS CLIs and to disconnect the console. The console's LEDs indicate what it is connected to. If no LED is lit, the console is either connected to the management module SMC SDI console or disconnected. Both console ports cannot be connected to the same CLI at the same time; if a console button press would cause a conflict, that module is skipped. If one of the console ports is disconnected, then the other console port can connect to any CLI.

If you connect a PC to one of the management module console ports with a serial cable and open a terminal session you begin by pressing Ctrl-T to enable console switching mode. Press Ctrl-T multiple times to cycle through the FIM and FPM module FortiOS CLIs (the new destination is displayed in the terminal window). If you press Ctrl-T after connecting to the FPM module in the highest slot number, the console is disconnected. Press Ctrl-T again to start over again at slot 1.

Once the console port is connected to the CLI that you want to use, press Enter to enable the CLI and login. The default administrator account for accessing the FortiOS CLIs is admin with no password.

When your session is complete you can press Ctrl-T until the prompt shows you have disconnected from the console.

Connecting to the FortiOS CLI of the FIM module in slot 1

Use the following steps to connect to the FortiOS CLI of the FIM module in slot 1:

  1. Connect the console cable supplied with your chassis to Console 1 and to your PC or other device RS-232 console port.
  2. Start a terminal emulation program on the management computer. Use these settings: Baud Rate (bps) 9600, Data bits 8, Parity None, Stop bits 1, and Flow Control None.
  3. Press Ctrl-T to enter console switch mode.
  4. Repeat pressing Ctrl-T until you have connected to slot 1. Example prompt:

<Switching to Console: FIM01 (9600)>

  5. Log in with an administrator name and password.

The default is admin with no password. For security reasons, it is strongly recommended that you change the password.

  6. When your session is complete, enter the exit command to log out.

Default VDOM configuration

By default, when the FortiGate-7000 first starts up, it is operating in multiple VDOM mode. The system has a management VDOM (named dmgmt-vdom) and the root VDOM. All management interfaces are in dmgmt-vdom and all other interfaces are in the root VDOM. You can add more VDOMs and add interfaces to them, or just use the root VDOM.
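For example, the following sketch adds a VDOM and assigns one of the front panel interfaces to it. The VDOM name vdom-a and the interface name 1-A3 are placeholders; because the system is already operating in multiple VDOM mode, interface changes are made from the global configuration:

config vdom
  edit vdom-a
end

config global
  config system interface
    edit "1-A3"
      set vdom "vdom-a"
    next
  end
end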

Default management VDOM

By default the FortiGate-7000 configuration includes a management VDOM named dmgmt-vdom. For the FortiGate-7000 system to operate normally you should not change the configuration of this VDOM and this VDOM should always be the management VDOM. You should also not add or remove interfaces from this VDOM.

You have full control over the configurations of other FortiGate-7000 VDOMs.

Firmware upgrades

All of the modules in your FortiGate-7000 run the same firmware image. You upgrade the firmware from the primary interface module GUI or CLI just as you would any FortiGate product. During the upgrade process the firmware of all of the modules in the chassis upgrades in one step. Firmware upgrades should be done during a quiet time because traffic will briefly be interrupted during the upgrade process.

If you are operating two FortiGate-7000 chassis in HA mode with uninterruptable-upgrade and session-pickup enabled, firmware upgrades should only cause a minimal traffic interruption. Use the following command to enable these settings. These settings are synchronized to all modules in the cluster.

config system ha
  set uninterruptable-upgrade enable
  set session-pickup enable
end

Restarting the FortiGate-7000

To restart all of the modules in a FortiGate-7000 chassis, connect to the primary FIM module CLI and enter the command execute reboot. When you enter this command all of the modules in the chassis reboot.


You can restart individual modules by logging into that module’s CLI and entering the execute reboot command.

Load balancing

FortiGate-7000E session-aware load balancing (SLBC) distributes TCP, UDP, and SCTP traffic from the interface modules to the processor modules. Traffic is load balanced based on the algorithm set by the following command:

config load-balance setting
  set dp-load-distribution-method {round-robin | src-ip | dst-ip | src-dst-ip | src-ip-sport | dst-ip-dport | src-dst-ip-sport-dport}
end

Where:

round-robin: Directs new requests to the next slot regardless of response time or number of connections.

src-ip: Traffic load is distributed across all slots according to the source IP address.

dst-ip: Traffic load is statically distributed across all slots according to the destination IP address.

src-dst-ip: Traffic load is distributed across all slots according to the source and destination IP addresses.

src-ip-sport: Traffic load is distributed across all slots according to the source IP address and source port.

dst-ip-dport: Traffic load is distributed across all slots according to the destination IP address and destination port.

src-dst-ip-sport-dport: Traffic load is distributed across all slots according to the source and destination IP address, source port, and destination port. This is the default load balance distribution method and represents true session-aware load balancing.
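For example, to distribute sessions based only on source and destination IP addresses, so that all of the traffic between a given pair of hosts is processed by the same FPM module regardless of ports:

config load-balance setting
  set dp-load-distribution-method src-dst-ip
end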

Traffic that cannot be load balanced

Some traffic types cannot be load balanced. Traffic that cannot be load balanced is all processed by the primary FPM module, which is usually the FPM module in slot 3. Internal to the system this FPM module is designated as the ELBC master. If the FPM module in slot 3 fails or is rebooted, the next FPM module will become the primary FPM module.

You can configure the FortiGate-7000 to send any type of traffic to the primary FPM or to other specific FPM modules using the config load-balance flow-rule command. By default, traffic that is only sent to the primary FPM module includes IPsec, IKE, GRE, session helper, Kerberos, BGP, RIP, IPv4 and IPv6 DHCP, PPTP, BFD, IPv4 multicast, and IPv6 multicast traffic. You can view the default configuration of the config load-balance flow-rule command to see how this is all configured. For example, the following configuration sends all IKE traffic to the primary FPM:

config load-balance flow-rule
  edit 1
    set status enable
    set vlan 0
    set ether-type ip
    set protocol udp
    set src-l4port 500-500
    set dst-l4port 500-500
    set action forward
    set forward-slot master
    set priority 5
    set comment "ike"
  next
  edit 2
    set status disable
    set vlan 0
    set ether-type ip
    set protocol udp
    set src-l4port 4500-4500
    set dst-l4port 0-0
    set action forward
    set forward-slot master
    set priority 5
    set comment "ike-natt src"
  next
  edit 3
    set status disable
    set vlan 0
    set ether-type ip
    set protocol udp
    set src-l4port 0-0
    set dst-l4port 4500-4500
    set action forward
    set forward-slot master
    set priority 5
    set comment "ike-natt dst"
  next
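Each default rule follows the same pattern: match sessions by Ethernet type, protocol, and port range, then forward the matching sessions to a slot. As an illustration only (BGP is already handled by a default rule, and the rule ID 40 is arbitrary), a rule built on this pattern to send BGP (TCP port 179) sessions to the primary FPM could look like this:

config load-balance flow-rule
  edit 40
    set status enable
    set ether-type ip
    set protocol tcp
    set dst-l4port 179-179
    set action forward
    set forward-slot master
    set comment "bgp"
  next
end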

Recommended configuration for traffic that cannot be load balanced

The following flow rules are recommended to handle common forms of traffic that cannot be load balanced. These flow rules send GPRS (port 2123), SSL VPN, IPv4 and IPv6 IPsec VPN, ICMP and ICMPv6 traffic to the primary (or master) FPM.

The CLI syntax below just shows the configuration changes. All other options are set to their defaults. For example, the flow rule option that controls the FPM slot that sessions are sent to is forward-slot and in all cases below forward-slot is set to its default setting of master. This setting sends matching sessions to the primary (or master) FPM.

config load-balance flow-rule
  edit 20
    set status enable
    set ether-type ipv4
    set protocol udp
    set dst-l4port 2123-2123
  next
  edit 21
    set status enable
    set ether-type ip
    set protocol tcp
    set dst-l4port 10443-10443
    set comment "ssl vpn to the primary FPM"
  next
  edit 22
    set status enable
    set ether-type ipv4
    set protocol udp
    set src-l4port 500-500
    set dst-l4port 500-500
    set comment "ipv4 ike"
  next
  edit 23
    set status enable
    set ether-type ipv4
    set protocol udp
    set src-l4port 4500-4500
    set comment "ipv4 ike-natt src"
  next
  edit 24
    set status enable
    set ether-type ipv4
    set protocol udp
    set dst-l4port 4500-4500
    set comment "ipv4 ike-natt dst"
  next
  edit 25
    set status enable
    set ether-type ipv4
    set protocol esp
    set comment "ipv4 esp"
  next
  edit 26
    set status enable
    set ether-type ipv6
    set protocol udp
    set src-l4port 500-500
    set dst-l4port 500-500
    set comment "ipv6 ike"
  next
  edit 27
    set status enable
    set ether-type ipv6
    set protocol udp
    set src-l4port 4500-4500
    set comment "ipv6 ike-natt src"
  next
  edit 28
    set status enable
    set ether-type ipv6
    set protocol udp
    set dst-l4port 4500-4500
    set comment "ipv6 ike-natt dst"
  next
  edit 29
    set status enable
    set ether-type ipv6
    set protocol esp
    set comment "ipv6 esp"
  next
  edit 30
    set ether-type ipv4
    set protocol icmp
    set comment "icmp"
  next
  edit 31
    set status enable
    set ether-type ipv6
    set protocol icmpv6
    set comment "icmpv6"
  next
  edit 32
    set ether-type ipv6
    set protocol 41
end

Configuration synchronization

The FortiGate-7000 synchronizes the configuration to all modules in the chassis. To support this feature, the interface module in slot 1 becomes the config-sync master and this module makes sure the configurations of all modules are synchronized. Every time you make a configuration change you must be logged into the chassis using the management address, which logs you into the config-sync master. All configuration changes made to the config-sync master are synchronized to all of the modules in the chassis.

If the FIM module in slot 1 fails or reboots, the FIM module in slot 2 becomes the config-sync master.


Operating a FortiGate-7000


This chapter describes some general FortiGate-7000 operating procedures.

Failover in a standalone FortiGate-7000

A FortiGate-7000 will continue to operate even if one of the FIM or FPM modules fails or is removed. If an FPM module fails, the sessions being processed by that module fail; those sessions are restarted and, along with all other sessions, load balanced to the remaining FPM modules.

If an FIM module fails, the other FIM module will continue to operate and will become the config-sync master. However, traffic received by the failed FIM module will be lost.

You can use LACP or redundant interfaces to connect interfaces of both FIMs to the same network. In this way, if one of the FIMs fails the traffic will continue to be received by the other FIM module.
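For example, the following is a minimal sketch of an LACP aggregate interface that includes one front panel interface from each FIM module. The aggregate name agg-core and the member interfaces 1-A1 and 2-A1 are placeholders; substitute the interfaces connected to your network:

config system interface
  edit "agg-core"
    set vdom "root"
    set type aggregate
    set member "1-A1" "2-A1"
    set lacp-mode active
  next
end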

Replacing a failed FPM or FIM module

This section describes how to remove a failed FPM or FIM module and replace it with a new one. The procedure is slightly different depending on whether you are operating in HA mode with two chassis or operating a standalone chassis.

Replacing a failed module in a standalone FortiGate-7000 chassis

  1. Power down the failed module by pressing the front panel power button.
  2. Remove the module from the chassis.
  3. Insert the replacement module. It should power up when inserted into the chassis if the chassis has power.
  4. The module's configuration is synchronized and its firmware is upgraded to match the firmware version on the primary module. The new module reboots.
  5. If the module will be running FortiOS Carrier, apply the FortiOS Carrier license to the module. The module reboots.
  6. Confirm that the new module is running the correct firmware version, either from the GUI or by using the get system status CLI command.

Manually update the module to the correct version if required. You can do this by logging into the module and performing a firmware upgrade.

  7. Verify that the configuration has been synchronized.

The following command output shows the sync status of the FIM modules in a FortiGate-7000 chassis. The field in_sync=1 indicates that the configurations of the modules are synchronized.

diagnose sys confsync status | grep in_sy

FIM04E3E16000080, Slave, uptime=177426.45, priority=2, slot_id=1:2, idx=0, flag=0x0, in_sync=1

FIM10E3E16000063, Master, uptime=177415.38, priority=1, slot_id=1:1, idx=1, flag=0x0, in_sync=1

If in_sync is not equal to 1 or if a module is missing in the command output you can try restarting the modules in the chassis by entering execute reboot from any module CLI. If this does not solve the problem, contact Fortinet support.

Replacing a failed module in a FortiGate-7000 chassis in an HA cluster

  1. Power down the failed module by pressing the front panel power button.
  2. Remove the module from the chassis.
  3. Insert the replacement module. It should power up when inserted into the chassis if the chassis has power.
  4. The module’s configuration is synchronized and its firmware is upgraded to match the configuration and firmware version on the primary module. The new module reboots.
  5. Confirm that the module is running the correct firmware version.

Manually update the module to the correct version if required. You can do this by logging into the module and performing a firmware upgrade.

  6. Configure the new module for HA operation. For example:

config system ha
  set mode a-p
  set chassis-id 1
  set hbdev m1 m2
  set hbdev-vlan-id 999
  set hbdev-second-vlan-id 990
end

  7. Optionally configure the hostname:

config system global
  set hostname <name>
end

The HA configuration and the hostname must be set manually because HA settings and the hostname are not synchronized.

  8. Verify that the configuration has been synchronized.

The following command output shows the sync status of the FIM modules in a FortiGate-7000 chassis. The field in_sync=1 indicates that the configurations of the modules are synchronized.

diagnose sys confsync status | grep in_sy

FIM04E3E16000080, Slave, uptime=177426.45, priority=2, slot_id=1:2, idx=0, flag=0x0, in_sync=1

FIM10E3E16000063, Master, uptime=177415.38, priority=1, slot_id=1:1, idx=1, flag=0x0, in_sync=1

If in_sync is not equal to 1 or if a module is missing in the command output you can try restarting the modules in the chassis by entering execute reboot from any module CLI. If this does not solve the problem, contact Fortinet support.

Installing firmware on an FIM or FPM module from the BIOS using a TFTP server

Use the procedures in this section to install firmware on a FIM or FPM module from a TFTP server after interrupting the boot up sequence from the BIOS.

You might want to use this procedure if you need to reset the configuration of a module to factory defaults by installing firmware from a reboot. You can also use this procedure if you have formatted one or more FIM or FPM modules from the BIOS by interrupting the boot process.

This procedure involves creating a connection between a TFTP server and one of the MGMT interfaces of one of the FIM modules, using a chassis console port to connect to the CLI of the module that you are upgrading the firmware for, rebooting this module, interrupting the boot from the console session, and installing the firmware.

This section includes two procedures, one for upgrading FIM modules and one for upgrading FPM modules. The two procedures are very similar, but a few details, most notably the local VLAN ID setting, are different. If you need to update both FIM and FPM modules, you should update the FIM modules first, because the FPM modules can only communicate with the TFTP server through FIM module interfaces.

Uploading firmware from a TFTP server to an FIM module

Use the following steps to upload firmware from a TFTP server to an FIM module. This procedure requires Ethernet connectivity between the TFTP server and one of the FIM module’s MGMT interfaces.

During this procedure, the FIM module will not be able to process traffic so, if possible, perform this procedure when the network is not processing any traffic.

If you are operating an HA configuration, you should remove the chassis from the HA configuration before performing this procedure.

  1. Set up a TFTP server and copy the firmware file to be installed into the TFTP server default folder.
  2. Set up your network to allow traffic between the TFTP server and one of the MGMT interfaces of the FIM module to be updated.

If the MGMT interface you are using is one of the MGMT interfaces connected as a LAG to a switch you must shutdown or disconnect all of the other connections in the LAG from the switch. This includes the MGMT interfaces in the other FIM module.

  3. Connect the console cable supplied with your chassis to the Console 1 port on your chassis front panel and to your management computer’s RS-232 console port.
  4. Start a terminal emulation program on the management computer. Use these settings: Baud Rate (bps) 9600, Data bits 8, Parity None, Stop bits 1, and Flow Control None.
  5. Press Ctrl-T to enter console switch mode.
  6. Repeat pressing Ctrl-T until you have connected to the module to be updated. Example prompt:

<Switching to Console: FIM02 (9600)>

  7. Optionally log into the FIM module’s CLI.
  8. Reboot the FIM module to be updated.

You can do this using the execute reboot command from the CLI or by pressing the power switch on the module front panel.

  9. When the FIM module starts up, follow the boot process in the terminal session and press any key when prompted to interrupt the boot process.
  10. Press C to set up the TFTP configuration.
  11. Use the BIOS menu to set the following. Only change settings if required.

[P]: Set image download port: MGMT1 (change if required)

[D]: Set DHCP mode: Disabled

[I]: Set local IP address: A temporary IP address to be used to connect to the TFTP server. This address must not be the same as the chassis management IP address and cannot conflict with other addresses on your network.

[S]: Set local Subnet Mask: Set as required for your network.

[G]: Set local gateway: Set as required for your network.

[V]: Local VLAN ID: Use -1 to clear the Local VLAN ID.

[T]: Set remote TFTP server IP address: The IP address of the TFTP server.

[F]: Set firmware image file name: The name of the firmware file to be installed.

  12. Press Q to quit this menu.
  13. Press R to review the configuration.

If you need to make any corrections, press C and make the changes as required. When the configuration is correct proceed to the next step.

  14. Press T to start the TFTP transfer.

The firmware image is uploaded from the TFTP server and installed on the FIM module, which then reboots. When it starts up, the module’s configuration is reset to factory defaults and then synchronized to match the configuration of the primary module. The module reboots again and can start processing traffic.

  15. Verify that the configuration has been synchronized.

The following command output shows the sync status of the FIM modules in a FortiGate-7000 chassis. The field in_sync=1 indicates that the configurations of the modules are synchronized.

diagnose sys confsync status | grep in_sy

FIM04E3E16000080, Slave, uptime=177426.45, priority=2, slot_id=1:2, idx=0, flag=0x0, in_sync=1

FIM10E3E16000063, Master, uptime=177415.38, priority=1, slot_id=1:1, idx=1, flag=0x0, in_sync=1

If in_sync is not equal to 1 or if a module is missing in the command output you can try restarting the modules in the chassis by entering execute reboot from any module CLI. If this does not solve the problem, contact Fortinet support.

Uploading firmware from a TFTP server to an FPM module

Use the following steps to upload firmware from a TFTP server to an FPM module. This procedure requires Ethernet connectivity between the TFTP server and one of the MGMT interfaces of one of the FIM modules in the same chassis as the FPM module.

During this procedure, the FPM module will not be able to process traffic so, if possible, perform this procedure when the network is not processing any traffic. However, the other FPM modules and the FIM modules in the chassis should continue to operate normally and the chassis can continue processing traffic.

If you are operating an HA configuration, you should remove the chassis from the HA configuration before performing this procedure.

  1. Set up a TFTP server and copy the firmware file to be installed into the TFTP server default folder.
  2. Set up your network to allow traffic between the TFTP server and a MGMT interface of one of the FIM modules in the chassis that also includes the FPM module.

You can use any MGMT interface of either of the FIM modules. If the MGMT interface you are using is one of the MGMT interfaces connected as a LAG to a switch you must shutdown or disconnect all of the other connections in the LAG from the switch. This includes the MGMT interfaces in the other FIM module.

  3. Connect the console cable supplied with your chassis to the Console 1 port on your chassis front panel and to your management computer’s RS-232 console port.
  4. Start a terminal emulation program on the management computer. Use these settings: Baud Rate (bps) 9600, Data bits 8, Parity None, Stop bits 1, and Flow Control None.
  5. Press Ctrl-T to enter console switch mode.
  6. Repeat pressing Ctrl-T until you have connected to the module to be updated. Example prompt:

<Switching to Console: FPM03 (9600)>

  7. Optionally log into the FPM module’s CLI.
  8. Reboot the FPM module to be updated.

You can do this using the execute reboot command from the CLI or by pressing the power switch on the module front panel.

  9. When the FPM module starts up, follow the boot process in the terminal session and press any key when prompted to interrupt the boot process.
  10. Press C to set up the TFTP configuration.
  11. Use the BIOS menu to set the following. Only change settings if required.

[P]: Set image download port: The name of the FIM module that can connect to the TFTP server (FIM01 is the FIM module in slot 1 and FIM02 is the FIM module in slot 2).

[D]: Set DHCP mode: Disabled.

[I]: Set local IP address: A temporary IP address to be used to connect to the TFTP server. This address must not be the same as the chassis management IP address and cannot conflict with other addresses on your network.

[S]: Set local Subnet Mask: Set as required for your network.

[G]: Set local gateway: Set as required for your network.

[V]: Local VLAN ID: The VLAN ID of the FIM interface that can connect to the TFTP server:

FIM01 local VLAN IDs

Interface        MGMT1   MGMT2   MGMT3   MGMT4
Local VLAN ID    11      12      13      14

FIM02 local VLAN IDs

Interface        MGMT1   MGMT2   MGMT3   MGMT4
Local VLAN ID    21      22      23      24

[T]: Set remote TFTP server IP address: The IP address of the TFTP server.

[F]: Set firmware image file name: The name of the firmware file to be installed.

  12. Press Q to quit this menu.
  13. Press R to review the configuration.

If you need to make any corrections, press C and make the changes as required. When the configuration is correct proceed to the next step.

  14. Press T to start the TFTP transfer.

The firmware image is uploaded from the TFTP server and installed on the FPM module, which then reboots. When it starts up, the module’s configuration is reset to factory defaults and then synchronized to match the configuration of the primary module. The module reboots again and can start processing traffic.

  15. Verify that the configuration has been synchronized.

The following command output shows the sync status of the FIM modules in a FortiGate-7000 chassis. The field in_sync=1 indicates that the configurations of the modules are synchronized.

diagnose sys confsync status | grep in_sy

FIM04E3E16000080, Slave, uptime=177426.45, priority=2, slot_id=1:2, idx=0, flag=0x0, in_sync=1

FIM10E3E16000063, Master, uptime=177415.38, priority=1, slot_id=1:1, idx=1, flag=0x0, in_sync=1

If in_sync is not equal to 1 or if a module is missing in the command output you can try restarting the modules in the chassis by entering execute reboot from any module CLI. If this does not solve the problem, contact Fortinet support.

 

FortiGate 7000 Series IPsec VPN


IPsec VPN

Adding source and destination subnets to IPsec VPN phase 2 configurations

If your FortiGate-7000 configuration includes IPsec VPNs you should enhance your IPsec VPN Phase 2 configurations as described in this section. If your FortiGate-7000 does not include IPsec VPNs you can proceed with a normal firmware upgrade.

Because the FortiGate-7000 only allows 16-bit to 32-bit routes, you must add one or more destination subnets to your IPsec VPN phase 2 configuration for FortiGate-7000 v5.4.5 using the following command:

config vpn ipsec phase2-interface
  edit <phase2-name>
    set phase1name <name>
    set src-subnet <IP> <netmask>
    set dst-subnet <IP> <netmask>
end

Where:

src-subnet is the subnet protected by the FortiGate that you are configuring and from which users connect to the destination subnet. Configuring the source subnet is optional but recommended.

dst-subnet is the destination subnet behind the remote IPsec VPN endpoint. Configuring the destination subnet is required.

You can add the source and destination subnets either before or after upgrading to v5.4.5 as these settings are compatible with both v5.4.3 and v5.4.5. However, if you make these changes after upgrading, your IPsec VPNs may not work correctly until these configuration changes are made.

Example basic IPsec VPN Phase 2 configuration

In a simple configuration such as the one below with an IPsec VPN between two remote subnets you can just add the subnets to the phase 2 configuration.

Enter the following command to add the source and destination subnets to the FortiGate-7000 IPsec VPN Phase 2 configuration.

config vpn ipsec phase2-interface
  edit "to_fgt2"
    set phase1name "to_fgt2"
    set src-subnet 172.16.1.0 255.255.255.0
    set dst-subnet 172.16.2.0 255.255.255.0
end

Example multiple subnet IPsec VPN Phase 2 configuration

In a more complex configuration, such as the one below with a total of five subnets, you still need to add all of the subnets to the Phase 2 configuration. In this case you can create a firewall address for each subnet, add the addresses to address groups, and add the address groups to the Phase 2 configuration.

Enter the following commands to create firewall addresses for each subnet.

config firewall address
  edit "local_subnet_1"
    set subnet 4.2.1.0 255.255.255.0
  next
  edit "local_subnet_2"
    set subnet 4.2.2.0 255.255.255.0
  next
  edit "remote_subnet_3"
    set subnet 4.2.3.0 255.255.255.0
  next
  edit "remote_subnet_4"
    set subnet 4.2.4.0 255.255.255.0
  next
  edit "remote_subnet_5"
    set subnet 4.2.5.0 255.255.255.0
end

And then put the five firewall addresses into two firewall address groups.

config firewall addrgrp
  edit "local_group"
    set member "local_subnet_1" "local_subnet_2"
  next
  edit "remote_group"
    set member "remote_subnet_3" "remote_subnet_4" "remote_subnet_5"
end

Now, use the firewall address groups in the Phase 2 configuration:

config vpn ipsec phase2-interface
  edit "to-fgt2"
    set phase1name "to-fgt2"
    set src-addr-type name
    set dst-addr-type name
    set src-name "local_group"
    set dst-name "remote_group"
end

Configuring the FortiGate-7000 as a dialup IPsec VPN server

FortiGate-7000s running v5.4.5 can be configured as dialup IPsec VPN servers.

Example dialup IPsec VPN configuration

The following shows how to setup a dialup IPsec VPN configuration where the FortiGate-7000 acts as a dialup IPsec VPN server.

To configure the FortiGate-7000 as a dialup IPsec VPN server:

Configure the phase 1 and set type to dynamic:

config vpn ipsec phase1-interface
  edit dialup-server
    set type dynamic
    set interface "v0020"
    set peertype any
    set psksecret <password>
end

Configure the phase 2. To support dialup IPsec VPN, set the destination subnet to 0.0.0.0 0.0.0.0:

config vpn ipsec phase2-interface
  edit dialup-server
    set phase1name dialup-server
    set src-subnet 4.2.0.0 255.255.0.0
    set dst-subnet 0.0.0.0 0.0.0.0
end

To configure the remote FortiGate as a dialup IPsec VPN client

The dialup IPsec VPN client should advertise its local subnet(s) using the phase 2 src-subnet option.

If there are multiple local subnets, create a phase 2 for each one. Each phase 2 advertises only one local subnet to the dialup IPsec VPN server. If more than one local subnet is added to the phase 2, only the first one is advertised to the server.

Dialup client configuration:

config vpn ipsec phase1-interface
  edit "to-fgt7k"
    set interface "v0020"
    set peertype any
    set remote-gw 1.2.0.1
    set psksecret <password>
end

config vpn ipsec phase2-interface
  edit "to-fgt7k"
    set phase1name "to-fgt7k"
    set src-subnet 4.2.6.0 255.255.255.0
    set dst-subnet 4.2.0.0 255.255.0.0
  next
  edit "to-fgt7k-2"
    set phase1name "to-fgt7k"
    set src-subnet 4.2.7.0 255.255.255.0
    set dst-subnet 4.2.0.0 255.255.0.0
end

Troubleshooting

Use the following commands to verify that IPsec VPN sessions are up and running.

Use the diagnose load-balance status command from the primary FIM interface module to determine the primary FPM processor module. For FortiGate-7000 HA, run this command from the primary FortiGate-7000. The third line of the command output shows which FPM is operating as the primary FPM.

diagnose load-balance status

FIM01: FIM04E3E16000074

Master FPM Blade: slot-4

Slot 3: FPM20E3E17900113

Status:Working    Function:Active

Link:      Base: Up          Fabric: Up

Heartbeat: Management: Good    Data: Good

Status Message:”Running”

Slot 4: FPM20E3E16800033

Status:Working    Function:Active

Link:      Base: Up          Fabric: Up

Heartbeat: Management: Good    Data: Good

Status Message:”Running”

FIM02: FIM10E3E16000040

Master FPM Blade: slot-4

Slot 3: FPM20E3E17900113

Status:Working    Function:Active

Link:      Base: Up          Fabric: Up

Heartbeat: Management: Good    Data: Good

Status Message:”Running”

Slot 4: FPM20E3E16800033

Status:Working    Function:Active

Link:      Base: Up          Fabric: Up

Heartbeat: Management: Good    Data: Good

Status Message:”Running”

Log into the primary FPM CLI and run the command diagnose vpn tunnel list name <phase2-name> to show the sessions for the phase 2 configuration. The example below is for the to-fgt2 phase 2 configuration configured previously in this chapter. The command output shows the security association (SA) setup for this phase 2 and all of the destination subnets.

Make sure the SA is installed and that the dst subnets are correct.

CH15 [FPM04] (002ipsecvpn) # diagnose vpn tunnel list name to-fgt2

list ipsec tunnel by names in vd 11

------------------------------------------------------
name=to-fgt2 ver=1 serial=2 4.2.0.1:0->4.2.0.2:0

bound_if=199 lgwy=static/1 tun=intf/0 mode=auto/1 encap=none/40 options[0028]=npu ike_assit
proxyid_num=1 child_num=0 refcnt=8581 ilast=0 olast=0 auto-discovery=0 ike_asssit_last_sent=4318202512
stat: rxp=142020528 txp=147843214 rxb=16537003048 txb=11392723577
dpd: mode=on-demand on=1 idle=20000ms retry=3 count=0 seqno=2
natt: mode=none draft=0 interval=0 remote_port=0
proxyid=to-fgt2 proto=0 sa=1 ref=8560 serial=8

src: 0:4.2.1.0/255.255.255.0:0 0:4.2.2.0/255.255.255.0:0
dst: 0:4.2.3.0/255.255.255.0:0 0:4.2.4.0/255.255.255.0:0 0:4.2.5.0/255.255.255.0:0

SA: ref=7 options=22e type=00 soft=0 mtu=9134 expire=42819/0B replaywin=2048 seqno=4a26f esn=0 replaywin_lastseq=00045e80

life: type=01 bytes=0/0 timeout=43148/43200

dec: spi=e89caf36 esp=aes key=16 26aa75c19207d423d14fd6fef2de3bcf ah=sha1 key=20 7d1a330af33fa914c45b80c1c96eafaf2d263ce7

enc: spi=b721b907 esp=aes key=16 acb75d21c74eabc58f52ba96ee95587f ah=sha1 key=20 41120083d27eb1d3c5c5e464d0a36f27b78a0f5a

dec:pkts/bytes=286338/40910978, enc:pkts/bytes=562327/62082855
npu_flag=03 npu_rgwy=4.2.0.2 npu_lgwy=4.2.0.1 npu_selid=b dec_npuid=3 enc_npuid=1

Log into the CLI of any of the FIM modules and run the command diagnose test application fctrlproxyd 2. The output should show matching destination subnets.

diagnose test application fctrlproxyd 2

fcp route dump : last_update_time 24107

Slot:4 routecache entry: (5)

checksum:27 AE 00 EA 10 8D 22 0C D6 48 AB 2E 7E 83 9D 24

vd:3 p1:to-fgt2 p2:to-fgt2 subnet:4.2.3.0 mask:255.255.255.0 enable:1
vd:3 p1:to-fgt2 p2:to-fgt2 subnet:4.2.4.0 mask:255.255.255.0 enable:1
vd:3 p1:to-fgt2 p2:to-fgt2 subnet:4.2.5.0 mask:255.255.255.0 enable:1

=========================================

FortiGate 7000 Series High Availability


High Availability

FortiGate-7000 supports a variation of active-passive FortiGate Clustering Protocol (FGCP) high availability between two identical FortiGate-7000 chassis. With active-passive FortiGate-7000 HA, you create redundant network connections to two identical FortiGate-7000s and add redundant HA heartbeat connections. Then you configure the FIM interface modules for HA. A cluster forms and a primary chassis is selected.

Example FortiGate-7040

All traffic is processed by the primary (or master) chassis. The backup chassis operates in hot standby mode. The configuration, active sessions, routing information, and so on is synchronized to the backup chassis. If the primary chassis fails, traffic automatically fails over to the backup chassis.

The primary chassis is selected based on a number of criteria including the configured priority, the bandwidth, the number of FIM interface failures, and the number of FPM or FIM modules that have failed. As part of the HA configuration you assign each chassis a chassis ID and you can set the priority of each FIM interface module and configure module failure tolerances and the link failure thresholds.

Before you begin configuring HA

Before you begin, the chassis should be running the same FortiOS firmware version, and interfaces should not be configured to get their addresses from DHCP or PPPoE. Register and apply licenses to each FortiGate-7000 before setting up the HA cluster. This includes licensing for FortiCare, IPS, AntiVirus, Web Filtering, Mobile Malware, FortiClient, FortiCloud, and additional virtual domains (VDOMs). Both FortiGate-7000s in the cluster must have the same level of licensing for FortiGuard, FortiCloud, FortiClient, and VDOMs. FortiToken licenses can be added at any time because they are synchronized to all cluster members.

If required, you should configure split ports on the FIMs on both chassis before configuring HA, because the modules have to reboot after split ports are configured. For example, to split the C1 interface of the FIM-7910E in slot 1 and the C1 and C4 interfaces of the FIM-7910E in slot 2, enter the following command:

config system global
  set split-port 1-C1 2-C1 2-C4
end

After configuring split ports, the chassis reboots and the configuration is synchronized.

On each chassis, make sure the configurations of the modules are synchronized before starting to configure HA. You can use the following command to verify that the configurations of all of the modules are synchronized:

diagnose sys confsync chsum | grep all
all: c0 68 d2 67 e1 23 d9 3a 10 50 45 c5 50 f1 e6 8e
all: c0 68 d2 67 e1 23 d9 3a 10 50 45 c5 50 f1 e6 8e
all: c0 68 d2 67 e1 23 d9 3a 10 50 45 c5 50 f1 e6 8e
all: c0 68 d2 67 e1 23 d9 3a 10 50 45 c5 50 f1 e6 8e
all: c0 68 d2 67 e1 23 d9 3a 10 50 45 c5 50 f1 e6 8e
all: c0 68 d2 67 e1 23 d9 3a 10 50 45 c5 50 f1 e6 8e
all: c0 68 d2 67 e1 23 d9 3a 10 50 45 c5 50 f1 e6 8e
all: c0 68 d2 67 e1 23 d9 3a 10 50 45 c5 50 f1 e6 8e

If the modules are synchronized, the checksums displayed should all be the same.

You can also use the following command to list the modules that are synchronized. The example output shows that all four modules have been configured for HA and added to the cluster.

diagnose sys confsync status | grep in_sync

Master, uptime=692224.19, priority=1, slot_id=1:1, idx=0, flag=0x0, in_sync=1
Slave, uptime=676789.70, priority=2, slot_id=1:2, idx=1, flag=0x0, in_sync=1
Slave, uptime=692222.01, priority=17, slot_id=1:4, idx=2, flag=0x64, in_sync=1
Slave, uptime=692271.30, priority=16, slot_id=1:3, idx=3, flag=0x64, in_sync=1

In this command output, in_sync=1 means the module is synchronized with the primary unit and in_sync=0 means the module is not synchronized.

Connect the M1 and M2 interfaces for HA heartbeat communication

HA heartbeat communication between chassis happens over the 10Gbit M1 and M2 interfaces of the FIM modules in each chassis. To set up HA heartbeat connections:

• Connect the M1 interfaces of all FIM modules together using a switch.
• Connect the M2 interfaces of all FIM modules together using another switch.

All of the M1 interfaces must be connected together with a switch and all of the M2 interfaces must be connected together with another switch. Connecting M1 interfaces or M2 interfaces directly is not supported as each FIM needs to communicate with all other FIMs.

Connect the M1 and M2 interfaces for HA heartbeat communication

Heartbeat packets are VLAN packets with VLAN ID 999 and ethertype 9890. The MTU value for the M1 and M2 interfaces is 1500. You can use the following commands to change the HA heartbeat packet VLAN ID and ethertype values if required for your switches. You must change these settings on each FIM interface module. By default the M1 and M2 interface heartbeat packets use the same VLAN IDs and ethertypes.

config system ha
  set hbdev-vlan-id <vlan>
  set hbdev-second-vlan-id <vlan>
  set ha-eth-type <eth-type>
end

Using separate switches for M1 and M2 is recommended for redundancy. It is also recommended that these switches be dedicated to HA heartbeat communication and not used for other traffic.

If you use the same switch for both M1 and M2, separate the M1 and M2 traffic on the switch and set the heartbeat traffic on the M1 and M2 Interfaces to have different VLAN IDs. For example, use the following command to set the heartbeat traffic on M1 to use VLAN ID 777 and the heartbeat traffic on M2 to use VLAN ID 888:

config system ha
  set hbdev-vlan-id 777
  set hbdev-second-vlan-id 888
end

If you don’t set different VLAN IDs for the M1 and M2 heartbeat packets, Q-in-Q must be enabled on the switch.

Sample switch configuration for a Cisco Catalyst switch. This configuration sets the interface speeds, configures the switch to allow vlan 999, and enables trunk mode:

##interface config
interface TenGigabitEthernet1/0/5
 description Chassis1 FIM1 M1
 switchport trunk allowed vlan 999
 switchport mode trunk

If you are using one switch for both M1 and M2 connections, the configuration would be the same except you would add q-in-q support and two different VLANs, one for M1 traffic and one for M2 traffic.

HA configuration

For the M1 connections:

interface Ethernet1/5
 description QinQ Test
 switchport mode dot1q-tunnel
 switchport access vlan 888
 spanning-tree port type edge

For the M2 connections:

interface Ethernet1/5
 description QinQ Test
 switchport mode dot1q-tunnel
 switchport access vlan 880
 spanning-tree port type edge

HA packets must have the configured VLAN tag (default 999). If the switch removes or changes this tag, HA heartbeat communication will not work and the cluster will form a split-brain configuration. In effect, two clusters will form, one in each chassis, and network traffic will be disrupted.

HA configuration

Use the following steps to set up the HA configuration between two chassis (chassis 1 and chassis 2). These steps are written for a set of two FortiGate-7040Es or 7060Es. The steps are similar for the FortiGate-7030E except that each FortiGate-7030E only has one FIM interface module.

Each FIM interface module has to be configured for HA separately; the HA configuration is not synchronized among FIMs. You can begin by setting up chassis 1 and configuring HA on both of the FIM interface modules in it. Then do the same for chassis 2.

Each of the FortiGate-7000s is assigned a chassis ID (1 and 2). These numbers just allow you to identify the chassis and do not influence primary unit selection.

Setting up HA on the FIM interface modules in the first FortiGate-7000 (chassis 1)

  1. Log into the CLI of the FIM interface module in slot 1 (FIM01) and enter the following command:

config system ha
  set mode a-p
  set password <password>
  set group-id <id>
  set chassis-id 1
  set hbdev m1 m2
end

This adds basic HA settings to this FIM interface module.

  2. Repeat this configuration on the FIM interface module in slot 2 (FIM02):

config system ha
  set mode a-p
  set password <password>
  set group-id <id>
  set chassis-id 1
  set hbdev m1 m2
end

  3. From either FIM interface module, enter the following command to confirm that the FortiGate-7000 is in HA mode:

diagnose sys ha status

The password and group-id must be unique for each HA cluster and must be the same on all FIM modules. If a cluster does not form, one of the first things to check is the group-id; you can also re-enter the password on both FIM interface modules.

Configure HA on the FIM interface modules in the second FortiGate-7000 (chassis 2)

  1. Repeat the same HA configuration settings on the FIM interface modules in the second chassis, except set the chassis ID to 2:

config system ha
  set mode a-p
  set password <password>
  set group-id <id>
  set chassis-id 2
  set hbdev m1 m2
end

  2. From any FIM interface module, enter the following command to confirm that the cluster has formed and all of the FIM modules have been added to it:

diagnose sys ha status

The cluster has now formed, and you can add your configuration, connect network equipment, and start operating the cluster. You can also modify the HA configuration depending on your requirements.

Verifying that the cluster is operating correctly

Enter the following CLI command to view the status of the cluster. You can enter this command from any module’s CLI. The HA members can be in a different order depending on the module CLI from which you enter the command.

If the cluster is operating properly, the following command output should indicate the primary and backup (master and slave) chassis as well as the primary and backup (master and slave) modules. For each module, the state portion of the output shows all the parameters used to select the primary FIM module. These parameters include the number of failed FPM modules that the FIM module is connecting to, the status of any link aggregation group (LAG) interfaces in the configuration, the state of the interfaces in the FIM module, the traffic bandwidth score for the FIM module (the higher the traffic bandwidth score, the more interfaces are connected to networks), and the status of the management links.

diagnose sys ha status

==========================================================================

Current slot: 1 Module SN: FIM04E3E16000085

Chassis HA mode: a-p

Chassis HA information:

[Debug_Zone HA information]

HA group member information: is_manage_master=1.

FG74E83E16000015: Slave, serialno_prio=1, usr_priority=128, hostname=CH15

FG74E83E16000016: Master, serialno_prio=0, usr_priority=127, hostname=CH16

HA member information:

CH16(FIM04E3E16000085), Master(priority=0), uptime=78379.78, slot=1, chassis=2(2)

slot: 1, chassis_uptime=145358.97, more: cluster_id:0, flag:1, local_priority:0, usr_priority:127, usr_override:0 state: worker_failure=0/2, lag=(total/good/down/bad-score)=5/5/0/0, intf_state=(port up)=0, force-state(1:force-to-master) traffic-bandwidth-score=120, mgmt-link=1

hbdevs: local_interface=     1-M1 best=yes local_interface=     1-M2 best=no

ha_elbc_master: 3, local_elbc_master: 3

CH15(FIM04E3E16000074), Slave(priority=2), uptime=145363.64, slot=1, chassis=1(2) slot: 1, chassis_uptime=145363.64, more: cluster_id:0, flag:0, local_priority:2, usr_priority:128, usr_override:0 state: worker_failure=0/2, lag=(total/good/down/bad-score)=5/5/0/0, intf_state=(port up)=0, force-state(-1:force-to-slave) traffic-bandwidth-score=120, mgmt-link=1

hbdevs: local_interface=     1-M1 last_hb_time=145640.39 status=alive local_interface=     1-M2 last_hb_time=145640.39 status=alive

CH15(FIM10E3E16000040), Slave(priority=3), uptime=145411.85, slot=2, chassis=1(2) slot: 2, chassis_uptime=145638.51, more: cluster_id:0, flag:0, local_priority:3, usr_priority:128, usr_override:0 state: worker_failure=0/2, lag=(total/good/down/bad-score)=5/5/0/0, intf_state=(port up)=0, force-state(-1:force-to-slave) traffic-bandwidth-score=100, mgmt-link=1

hbdevs: local_interface=     1-M1 last_hb_time=145640.62 status=alive local_interface=     1-M2 last_hb_time=145640.62 status=alive

CH16(FIM10E3E16000062), Slave(priority=1), uptime=76507.11, slot=2, chassis=2(2) slot: 2, chassis_uptime=145641.75, more: cluster_id:0, flag:0, local_priority:1, usr_priority:127, usr_override:0 state: worker_failure=0/2, lag=(total/good/down/bad-score)=5/5/0/0, intf_state=(port up)=0, force-state(-1:force-to-slave) traffic-bandwidth-score=100, mgmt-link=1

hbdevs: local_interface=     1-M1 last_hb_time=145640.39 status=alive local_interface=     1-M2 last_hb_time=145640.39 status=alive

HA management configuration

In HA mode, you should connect the interfaces in the mgmt 802.3 static aggregate interfaces of both chassis to the same switch. You can create one aggregate interface on the switch and connect both chassis management interfaces to it.

Managing individual modules in HA mode

When you browse to the system management IP address you connect to the primary FIM interface module in the primary chassis. Only the primary FIM interface module responds to management connections using the system management IP address. If a failover occurs you can connect to the new primary FIM interface module using the same system management IP address.

In some cases you may want to connect to an individual FIM or FPM module in a specific chassis. For example, you may want to view the traffic being processed by the FPM module in slot 3 of chassis 2. You can connect to the GUI or CLI of individual modules in the chassis using the system management IP address with a special port number.

For example, if the system management IP address is 1.1.1.1 you can browse to https://1.1.1.1:44323 to connect to the FPM module in chassis 2 slot 3. The special port number (in this case 44323) is a combination of the service port, chassis ID, and slot number. The following table lists the special ports for common admin protocols:

FortiGate-7000 HA special administration port numbers

Chassis and slot number   Slot address   HTTP (80)   HTTPS (443)   Telnet (23)   SSH (22)   SNMP (161)
Ch1 slot 1                FIM01          8001        44301         2301          2201       16101
Ch1 slot 2                FIM02          8002        44302         2302          2202       16102
Ch1 slot 3                FPM03          8003        44303         2303          2203       16103
Ch1 slot 4                FPM04          8004        44304         2304          2204       16104
Ch1 slot 5                FPM05          8005        44305         2305          2205       16105
Ch1 slot 6                FPM06          8006        44306         2306          2206       16106
Ch2 slot 1                FIM01          8021        44321         2321          2221       16121
Ch2 slot 2                FIM02          8022        44322         2322          2222       16122
Ch2 slot 3                FPM03          8023        44323         2323          2223       16123
Ch2 slot 4                FPM04          8024        44324         2324          2224       16124
Ch2 slot 5                FPM05          8025        44325         2325          2225       16125
Ch2 slot 6                FPM06          8026        44326         2326          2226       16126

For example:

• To connect to the GUI of the FPM module in chassis 1 slot 3 using HTTPS, browse to https://1.1.1.1:44313.
• To send an SNMP query to the FPM module in chassis 2 slot 6, use the port number 16126.

The formula for calculating the special port number is: service_port x 100 + (chassis_id - 1) x 20 + slot_id, where chassis 1 has chassis ID 1 and chassis 2 has chassis ID 2.
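For example, for HTTPS (service port 443) to the module in chassis 2 slot 3: 443 x 100 + (2 - 1) x 20 + 3 = 44300 + 20 + 3 = 44323, which matches the table above.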

Firmware upgrade

All of the modules in a FortiGate-7000 HA cluster run the same firmware image. You upgrade the firmware from the GUI or CLI by logging into the primary FIM interface module using the system management IP address and uploading the firmware image.

If uninterruptable-upgrade and session-pickup are enabled, firmware upgrades should only cause a minimal traffic interruption. Use the following command to enable these settings (they should be enabled by default). These settings are synchronized to all modules in the cluster.

config system ha
  set uninterruptable-upgrade enable
  set session-pickup enable
end

When enabled, the primary FIM interface module uploads firmware to all modules, but the modules in the backup chassis install their new firmware first, reboot, rejoin the cluster, and resynchronize. Then all traffic fails over to the backup chassis, which becomes the new primary chassis, and the modules in the new backup chassis upgrade their firmware and rejoin the cluster. Unless override is enabled, the new primary chassis continues to operate as the primary chassis.

Normally you would want to enable uninterruptable-upgrade to minimize traffic interruptions. But uninterruptable-upgrade does not have to be enabled. In fact, if a traffic interruption is not going to cause any problems, you can disable uninterruptable-upgrade so that the firmware upgrade process takes less time.
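For example, to accept a longer traffic interruption in exchange for a faster upgrade, you could disable the setting using the same config system ha command shown above:

config system ha
  set uninterruptable-upgrade disable
end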

Session failover (session-pickup)

Session failover means that after a failover, communications sessions resume on the new primary FortiGate7000 with minimal or no interruption. Two categories of sessions need to be resumed after a failover:

• Sessions passing through the cluster
• Sessions terminated by the cluster

Session failover (also called session pickup) is not enabled by default for FortiGate-7000 HA. If session pickup is enabled, while the FortiGate-7000 HA cluster is operating, the primary FortiGate-7000 informs the backup FortiGate-7000 of changes to the primary FortiGate-7000 connection and state tables for TCP and UDP sessions passing through the cluster, keeping the backup FortiGate-7000 up to date with the traffic currently being processed by the cluster.

After a failover the new primary FortiGate-7000 recognizes open sessions that were being handled by the cluster. The sessions continue to be processed by the new primary FortiGate-7000 and are handled according to their last known state.

Session pickup has some limitations. For example, session failover is not supported for sessions being scanned by proxy-based security profiles. Session failover is supported for sessions being scanned by flow-based security profiles; however, flow-based sessions that fail over are not inspected after they fail over.

Sessions terminated by the cluster include management sessions (such as HTTPS connections to the FortiGate GUI or SSH connections to the CLI, as well as SNMP, logging, and so on). Also included in this category are IPsec VPN, SSL VPN, and explicit proxy sessions. In general, whether or not session pickup is enabled, these sessions do not fail over and have to be restarted.

Enabling session pickup for TCP and UDP

To enable session-pickup, from the CLI enter:

config system ha
  set session-pickup enable
end

When session-pickup is enabled, sessions in the primary FortiGate-7000 TCP and UDP session tables are synchronized to the backup FortiGate-7000. As soon as a new TCP or UDP session is added to the primary FortiGate-7000 session table, that session is synchronized to the backup FortiGate-7000. This synchronization happens as quickly as possible to keep the session tables synchronized.

If the primary FortiGate-7000 fails, the new primary FortiGate-7000 uses its synchronized session tables to resume all TCP and UDP sessions that were being processed by the former primary FortiGate-7000 with only minimal interruption. Under ideal conditions all TCP and UDP sessions should be resumed. This is not guaranteed though and under less than ideal conditions some sessions may need to be restarted.

If session pickup is disabled

If you disable session pickup, the FortiGate-7000 HA cluster does not keep track of sessions, and after a failover, active sessions have to be restarted or resumed. Most sessions can be resumed as a normal result of how TCP and UDP resume communication after any routine network interruption.

If you do not require session failover protection, leaving session pickup disabled may reduce CPU usage and HA heartbeat network bandwidth usage. Also, if your FortiGate-7000 HA cluster is mainly being used for traffic that is not synchronized (for example, proxy-based security profile processing), enabling session pickup is not recommended since most sessions will not fail over anyway.
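If you previously enabled session pickup and decide you do not need session failover, you can disable it again with the same session-pickup setting shown above (session pickup is disabled by default):

config system ha
  set session-pickup disable
end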

Primary unit selection and failover criteria

Once two FortiGate-7000s recognize that they can form a cluster, they negotiate to select a primary chassis. Primary selection occurs automatically based on the criteria shown below. After the cluster selects the primary, the other chassis becomes the backup.

Negotiation and primary chassis selection also takes place if one of the criteria for selecting the primary chassis changes. For example, an interface can become disconnected or a module can fail. After this happens, the cluster can renegotiate to select a new primary chassis, also using the criteria shown below.

If there are no failures and if you haven’t configured any settings to influence primary chassis selection, the chassis with the highest serial number becomes the primary chassis.

Using the serial number is a convenient way to differentiate FortiGate-7000 chassis; so basing primary chassis selection on the serial number is predictable and easy to understand and interpret. Also the chassis with the highest serial number would usually be the newest chassis with the most recent hardware version. In many cases you may not need active control over primary chassis selection, so basic primary chassis selection based on serial number is sufficient.

In some situations you may want to have control over which chassis becomes the primary chassis. You can control primary chassis selection by setting the priority of one chassis to be higher than the priority of the other. If you change the priority of one of the chassis, during negotiation, the chassis with the highest priority becomes the primary chassis. As shown above, FortiGate-7000 FGCP selects the primary chassis based on priority before serial number. For more information about how to use priorities, see Priority and primary chassis selection below.

Chassis uptime is also a factor. Normally when two chassis start up, their uptimes are similar and do not affect primary chassis selection. However, during operation, if one of the chassis goes down the other will have a much higher uptime and will be selected as the primary chassis before priority and serial number are tested.

Verifying primary chassis selection

You can use the diagnose sys ha status command to verify which chassis has become the primary chassis as shown by the following command output example. This output also shows that the chassis with the highest serial number was selected to be the primary chassis.

diagnose sys ha status

==========================================================================

Current slot: 1 Module SN: FIM04E3E16000085 Chassis HA mode: a-p

Chassis HA information:

[Debug_Zone HA information]

HA group member information: is_manage_master=1.

FG74E83E16000015: Slave, serialno_prio=1, usr_priority=128, hostname=CH15

FG74E83E16000016: Master, serialno_prio=0, usr_priority=127, hostname=CH16

How link and module failures affect primary chassis selection

The total number of connected data interfaces in a chassis has a higher priority than the number of failed modules in determining which chassis in a FortiGate-7000 HA configuration is the primary chassis. For example, if one chassis has a failed FPM module and the other has a disconnected or failed data interface, the chassis with the failed processor module becomes the primary unit.

For another example, the following diagnose sys ha status command shows the HA status for a cluster where one chassis has a disconnected or failed data interface and the other chassis has a failed FPM module.

diagnose sys ha status

==========================================================================

Slot: 2 Module SN: FIM01E3E16000088 Chassis HA mode: a-p

Chassis HA information:

[Debug_Zone HA information]

HA group member information: is_manage_master=1.

FG74E33E16000027: Master, serialno_prio=0, usr_priority=128, hostname=Chassis-K FG74E13E16000072: Slave, serialno_prio=1, usr_priority=128, hostname=Chassis-J

HA member information:

Chassis-K(FIM01E3E16000088), Slave(priority=1), uptime=2237.46, slot=2, chassis=1(1) slot: 2, chassis_uptime=2399.58,

state: worker_failure=1/2, lag=(total/good/down/bad-score)=2/2/0/0, intf_state=(port up)=0, force-state(0:none) traffic-bandwidth-score=20, mgmt-link=1

hbdevs: local_interface= 2-M1 best=yes local_interface= 2-M2 best=no

Chassis-J(FIM01E3E16000031), Slave(priority=2), uptime=2151.75, slot=2, chassis=2(1) slot: 2, chassis_uptime=2151.75,

state: worker_failure=0/2, lag=(total/good/down/bad-score)=2/2/0/0, intf_state=(port up)=0, force-state(0:none) traffic-bandwidth-score=20, mgmt-link=1

hbdevs: local_interface= 2-M1 last_hb_time= 2399.81 status=alive local_interface= 2-M2 last_hb_time= 0.00 status=dead

Chassis-J(FIM01E3E16000033), Slave(priority=3), uptime=2229.63, slot=1, chassis=2(1) slot: 1, chassis_uptime=2406.78,

state: worker_failure=0/2, lag=(total/good/down/bad-score)=2/2/0/0, intf_state=(port up)=0, force-state(0:none) traffic-bandwidth-score=20, mgmt-link=1

hbdevs: local_interface= 2-M1 last_hb_time= 2399.81 status=alive local_interface= 2-M2 last_hb_time= 0.00 status=dead

Chassis-K(FIM01E3E16000086), Master(priority=0), uptime=2203.30, slot=1, chassis=1(1) slot: 1, chassis_uptime=2203.30,

state: worker_failure=1/2, lag=(total/good/down/bad-score)=2/2/0/0, intf_state=(port up)=1, force-state(0:none) traffic-bandwidth-score=30, mgmt-link=1

hbdevs: local_interface= 2-M1 last_hb_time= 2399.74 status=alive local_interface= 2-M2 last_hb_time= 0.00 status=dead

This output shows that chassis 1 (hostname Chassis-K) is the primary or master chassis. The reason for this is that chassis 1 has a total traffic-bandwidth-score of 30 + 20 = 50, while the total traffic-bandwidth-score for chassis 2 (hostname Chassis-J) is 20 + 20 = 40.

The output also shows that both FIM modules in chassis 1 are detecting a worker failure (worker_failure=1/2) while both FIM modules in chassis 2 are not (worker_failure=0/2). The intf_state=(port up)=1 field shows that the FIM module in slot 1 of chassis 1 has one more interface connected than the FIM module in slot 1 of chassis 2. It is this extra connected interface that gives the FIM module in chassis 1 slot 1 a higher traffic bandwidth score than the FIM module in slot 1 of chassis 2.

One of the interfaces on the FIM module in slot 1 of chassis 2 must have failed. In a normal HA configuration the FIM modules in matching slots of each chassis should have redundant interface connections. So if one module has fewer connected interfaces this indicates a link failure.

FIM module failures

If an FIM module fails, not only will HA recognize this as a module failure, it will also give the chassis with the failed FIM module a much lower traffic bandwidth score. So an FIM module failure is more likely to cause an HA failover than an FPM module failure.

Also, the traffic bandwidth score for an FIM module with more connected interfaces would be higher than the score for an FIM module with fewer connected interfaces. So if a different FIM module failed in each chassis, the chassis with the functioning FIM module with the most connected data interfaces would have the highest traffic bandwidth score and would become the primary chassis.

Management link failures

Management connections to a chassis can affect primary chassis selection. If the management connection to one chassis becomes disconnected, a failover will occur and the chassis that still has management connections will become the primary chassis.

Link failure threshold and board failover tolerance

The default settings of the link failure threshold and the board failover tolerance result in the default link and module failure behavior. You can change these settings if you want to modify this behavior. For example, if you want a failover to occur when an FPM module fails, even if an interface has also failed, you can increase the board failover tolerance setting.

Link failure threshold

The link failure threshold determines how many interfaces in a link aggregation group (LAG) interface can be lost before the LAG interface is considered down. The chassis with the most connected LAGs becomes the primary chassis. If a LAG goes down, the cluster will negotiate and may select a new primary chassis. You can use the following command to change the link failure threshold:

config system ha
  set link-failure-threshold <threshold>
end

The threshold range is 0 to 80 and 0 is the default.

A threshold of 0 means that if a single interface in any LAG fails, the LAG is considered down. A higher failure threshold means that more interfaces in a LAG can fail before the LAG is considered down. For example, if the threshold is set to 1, at least two interfaces will have to fail.
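For example, to allow one interface in each LAG to fail before the LAG is considered down, set the threshold described above to 1:

config system ha
  set link-failure-threshold 1
end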

Board failover tolerance

You can use the following command to configure board failover tolerance.

config system ha
  set board-failover-tolerance <tolerance>
end

The tolerance range is 0 to 12, and 0 is the default.

Priority and primary chassis selection

A tolerance of 0 means that if a single module fails in the primary chassis a failover occurs and the chassis with the fewest failed modules becomes the new primary chassis. A higher failover tolerance means that more modules can fail before a failover occurs. For example, if the tolerance is set to 1, at least two modules in the primary chassis will have to fail before a failover occurs.
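For example, to tolerate a single module failure in the primary chassis without triggering a failover, set the tolerance described above to 1:

config system ha
  set board-failover-tolerance 1
end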

Priority and primary chassis selection

You can select a chassis to become the primary chassis by setting the HA priority of one or more of its FIM modules (for example, the FIM module in slot 1) higher than the priority of the other FIM modules. Enter the following command to set the HA priority:

config system ha
  set priority <number>
end

The default priority is 128.

The chassis with the highest total FIM module HA priority becomes the primary chassis.

Override and primary chassis selection

Enabling override changes the order of primary chassis selection. If override is enabled, primary chassis selection considers priority before chassis uptime and serial number. This means that if you set the device priority higher for one chassis, with override enabled this chassis becomes the primary chassis even if its uptime and serial number are lower than the other chassis.

Enter the following command to enable override.

config system ha
  set override enable
end

When override is enabled, primary unit selection still checks the traffic bandwidth score, aggregate interface state, management interface links, and FPM module failures first. So any of these factors can affect primary chassis selection, even if override is enabled.

FortiGate-7000 v5.4.5 special features and limitations

This section describes special features and limitations for FortiGate-7000 v5.4.5.

Managing the FortiGate-7000

Management is only possible through the MGMT1 to MGMT4 front panel management interfaces. By default the MGMT1 to MGMT4 interfaces of the FIM modules in slot 1 and slot 2 are in a single static aggregate interface named mgmt with IP address 192.168.1.99. You manage the FortiGate-7000 by connecting any one of these eight interfaces to your network, opening a web browser and browsing to https://192.168.1.99.

Default management VDOM

By default the FortiGate-7000 configuration includes a management VDOM named dmgmt-vdom. For the FortiGate-7000 system to operate normally you should not change the configuration of this VDOM, and this VDOM should always be the management VDOM. You should also not add or remove interfaces from this VDOM.

You have full control over the configurations of other FortiGate-7000 VDOMs.

Firewall

TCP sessions with NAT enabled that are expected to be idle for more than the distributed processing normal TCP timer (which is 3605 seconds) should only be distributed to the master FPM using a flow rule. You can configure the distributed normal TCP timer using the following command:

config system global
  set dp-tcp-normal-timer <timer>
end

UDP sessions with NAT enabled that are expected to be idle for more than the distributed processing normal UDP timer should only be distributed to the primary FPM using a flow rule.
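A sketch of the corresponding UDP timer change, assuming the option is named dp-udp-idle-timer in config system global (verify the option name on your firmware build):

config system global
  set dp-udp-idle-timer <timer>
end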

IP Multicast

IPv4 and IPv6 Multicast traffic is only sent to the primary FPM module (usually the FPM in slot 3). This is controlled by the following configuration:

config load-balance flow-rule
  edit 18
    set status enable
    set vlan 0
    set ether-type ipv4
    set src-addr-ipv4 0.0.0.0 0.0.0.0
    set dst-addr-ipv4 224.0.0.0 240.0.0.0
    set protocol any
    set action forward
    set forward-slot master
    set priority 5
    set comment "ipv4 multicast"
  next
  edit 19
    set status enable
    set vlan 0
    set ether-type ipv6
    set src-addr-ipv6 ::/0
    set dst-addr-ipv6 ff00::/8
    set protocol any
    set action forward
    set forward-slot master
    set priority 5
    set comment "ipv6 multicast"
end

High Availability

Only the M1 and M2 interfaces are used for the HA heartbeat communication.

When using both M1 and M2 for the heartbeat, FortiGate-7000 v5.4.5 requires two switches: the first switch connects all M1 ports together and the second switch connects all M2 ports together. This is because the same VLAN is used for both M1 and M2 and the interface groups should remain in different broadcast domains.

Using a single switch for both M1 and M2 heartbeat traffic is possible if the switch supports q-in-q tunneling. In this case use different VLANs for M1 traffic and M2 traffic to keep two separated broadcast domains in the switch.

The following FortiOS HA features are not supported or are supported differently by FortiGate-7000 v5.4.5:

• Remote IP monitoring (configured with the option pingserver-monitor-interface and related settings) is not supported.
• Active-active HA is not supported.
• The range for the HA group-id is 0 to 14.
• Failover logic for FortiGate-7000 v5.4.5 HA is not the same as FGSP for other FortiGate clusters.
• HA heartbeat configuration is specific to FortiGate-7000 systems and differs from standard HA.
• FortiGate Session Life Support Protocol (FGSP) HA (also called standalone session synchronization) is not supported.

Shelf Manager Module

It is not possible to access SMM CLI using Telnet or SSH. Only console access is supported using the chassis front panel console ports as described in the FortiGate-7000 system guide.

 


For monitoring purposes, IPMI over IP is supported on SMM Ethernet ports. See your FortiGate-7000 system guide for details.

FortiOS features that are not supported by FortiGate-7000 v5.4.5

The following mainstream FortiOS 5.4.5 features are not supported by the FortiGate-7000 v5.4.5:

• Hardware switch
• Switch controller
• WiFi controller
• WAN load balancing (SD-WAN)
• IPv4 over IPv6, IPv6 over IPv4, and IPv6 over IPv6 features
• GRE tunneling is only supported after creating a load balance flow rule, for example:

config load-balance flow-rule
  edit 0
    set status enable
    set vlan 0
    set ether-type ip
    set protocol gre
    set action forward
    set forward-slot master
    set priority 3
end

• Hard disk features including WAN optimization, web caching, explicit proxy content caching, disk logging, and GUI-based packet sniffing.
• Log messages should be sent only using the management aggregate interface.

IPsec VPN tunnels terminated by the FortiGate-7000

This section lists FortiGate-7000 limitations for IPsec VPN tunnels terminated by the FortiGate-7000:

• Interface-based IPsec VPN is recommended.
• Policy-based IPsec VPN is supported, but requires creating flow rules for each Phase 2 selector.
• Dynamic routing and policy routing are not supported for IPsec interfaces.
• IPsec static routes do not consider distance, weight, or priority settings. IPsec static routes are always installed in the routing table, regardless of the tunnel state.
• IPsec tunnels are not load balanced across the FPMs; all IPsec tunnel sessions are sent to the primary FPM module.
• IPsec VPN dialup or dynamic tunnels require a flow rule that sends traffic destined for IPsec dialup IP pools to the primary FPM module (see the sketch after this list).
• In an HA configuration, IPsec SAs are not synchronized to the backup chassis. IPsec SAs are re-negotiated after a failover.
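As mentioned in the dialup bullet above, the following is a hedged sketch of such a flow rule, assuming a hypothetical dialup IP pool of 10.10.10.0/24 and using the same flow-rule options shown in the multicast examples earlier in this document:

config load-balance flow-rule
  edit 0
    set status enable
    set vlan 0
    set ether-type ipv4
    set src-addr-ipv4 0.0.0.0 0.0.0.0
    set dst-addr-ipv4 10.10.10.0 255.255.255.0
    set protocol any
    set action forward
    set forward-slot master
    set priority 5
    set comment "dialup ipsec pool to primary FPM"
  next
end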

SSL VPN                                                                                                       v5.4.5

SSL VPN

Sending all SSL VPN sessions to the primary FPM module is recommended. You can do this in either of the following ways (see the sketch after this list):

  • Creating a flow rule that sends all sessions that use the SSL VPN destination port and IP address to the primary FPM module.
  • Creating flow rules that send all sessions that use the SSL VPN IP pool addresses to the primary FPM module.
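As an illustration of the first approach, the following is a minimal sketch assuming a hypothetical SSL VPN server address of 203.0.113.10 listening on the default port 443; both values are placeholders:

config load-balance flow-rule
  edit 0
    set status enable
    set ether-type ipv4
    set dst-addr-ipv4 203.0.113.10 255.255.255.255
    set protocol tcp
    set dst-l4port 443-443
    set action forward
    set forward-slot master
    set comment "SSL VPN sessions to primary FPM (example)"
  next
end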

Traffic shaping and DDoS policies

Each FPM module applies traffic shaping and DDoS quotas independently. Because of load-balancing, this may allow more traffic than expected.

Sniffer mode (one-arm sniffer)

One-arm sniffer mode is only supported after creating a load balance flow rule to direct sniffer traffic to a specific FPM module.
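For example, the following sketch directs all traffic received on a hypothetical sniffer port (1c1 here) to the FPM module in slot 3; the interface name and target slot are placeholders:

config load-balance flow-rule
  edit 0
    set status enable
    set src-interface 1c1
    set ether-type any
    set action forward
    set forward-slot FPM3
    set comment "one-arm sniffer traffic to FPM3 (example)"
  next
end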

FortiGuard Web Filtering

All FortiGuard rating queries are sent through the management aggregate interface from the management VDOM (named dmgmt-vdom).

Log messages include a slot field

An additional “slot” field has been added to log messages to identify the FPM module that generated the log.

FortiOS Carrier

You have to apply a FortiOS Carrier license separately to each FIM and FPM module to license a FortiGate-7000 chassis for FortiOS Carrier.

Special notice for new deployment connectivity testing

Only the primary FPM module can successfully ping external IP addresses. During a new deployment, while performing connectivity testing from the FortiGate-7000, make sure to run execute ping tests from the primary FPM module CLI.

What I learned at Accelerate 18


What I Learned at Accelerate18

I was incredibly blessed to get the opportunity to go to Las Vegas this year for the Fortinet Accelerate 18 conference. For those that don’t know, this is the Fortinet conference where they unveil all the goodies, provide excellent hands-on training, and give clients, partners, and distributors the unique opportunity to mingle, get to know each other, and, more importantly, put faces to names for people that may have been working together for years without ever getting that face-to-face time.

As always, this event was a blast. Obviously, any event that takes place in Sin City is going to be a fun adventure for any red-blooded male with a few bucks and some time to kill, but let’s face it, that could probably be said about any major city these days.

This little post is a summary of the things I consider the most important takeaways from the conference this year. This is purely subjective and geared toward my interests, so you may have differences of opinion.

FortiOS 6

Just as FortiOS 5.6 was starting to get stable enough to use, Fortinet has unleashed FortiOS 6, which is going to bring a plethora of new features and capabilities. A lot of you will get a kick out of the revamped SD-WAN capabilities that make the functionality far superior to existing iterations. Not to mention the incredible visibility enhancements that are going to make it much easier to decipher what is truly taking place on your network.

FortiGate 6000 Series

The 7000 series (I’m running a 7060E) is an incredible piece of machinery. The 6000 series is going to provide excellent performance but in an appliance form factor. So, while you may be looking at doing some data center consolidation or space reduction, this is definitely going to be the edge, “top of rack” style FortiGate that you are going to look at. The chassis are large, and with real estate being at a premium, this appliance is really going to be a great replacement for those that aren’t looking to grow into the device (the approach most chassis clients take).

Wie Ling Neo Is Super Intelligent

Ok, so I didn’t learn this at Accelerate18. I got to learn this directly by having some discussions with her while troubleshooting some 7060E issues. Wie Ling is the product manager for the 5k chassis, 6k appliance, and 7k chassis. Words cannot describe how sharp this woman is. If you ever get the opportunity to sit down with her and discuss FortiGate architecture, why they do what they do, and how they work in general, you will be in for a treat.

The Fabric Is Growing

The direction the company is taking with the security fabric is incredible. When the fabric first came out I was skeptical. I thought to myself, “ahh, another fabric/API/thing that is never going to be used.” Well, I was wrong. Fortinet has taken this initiative and run with it, and the things that are coming out of the developer’s labs are just getting better and better. Automated responses, incident response readiness, it’s all going to be great.


FortiGate-7000 v5.4.3 special features and limitations


FortiGate-7000 v5.4.3 special features and limitations

This section describes special features and limitations for FortiGate-7000 v5.4.3.

Managing the FortiGate-7000

Management is only possible through the MGMT1 to MGMT4 front panel management interfaces. By default the MGMT1 to MGMT4 interfaces of the FIM modules in slot 1 and slot 2 are in a single static aggregate interface named mgmt with IP address 192.168.1.99. You manage the FortiGate-7000 by connecting any one of these eight interfaces to your network, opening a web browser and browsing to https://192.168.1.99.

Default management VDOM

By default the FortiGate-7000 configuration includes a management VDOM named dmgmt-vdom. For the FortiGate-7000 system to operate normally you should not change the configuration of this VDOM and this VDOM should always be the management VDOM. You should also not add or remove interfaces from this VDOM.

You have full control over the configurations of other FortiGate-7000 VDOMs.

Firewall

TCP sessions with NAT enabled that are expected to be idle for more than the distributed processing normal TCP timer (3605 seconds) should only be distributed to the primary FPM using a flow rule. You can configure the distributed processing normal TCP timer using the following command:

config system global
  set dp-tcp-normal-timer <timer>
end

UDP sessions with NAT enabled that are expected to be idle for more than the distributed processing normal UDP timer should only be distributed to the primary FPM using a flow rule.
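As a sketch, a flow rule for either case could match the long-lived service and forward it to the primary FPM; the server address 192.0.2.25 and TCP port 22 below are placeholder values for a hypothetical long-idle service:

config load-balance flow-rule
  edit 0
    set status enable
    set ether-type ipv4
    set dst-addr-ipv4 192.0.2.25 255.255.255.255
    set protocol tcp
    set dst-l4port 22-22
    set action forward
    set forward-slot master
    set comment "long-lived NAT sessions to primary FPM (example)"
  next
end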

Link monitoring and health checking

ICMP-based link monitoring for SD-WAN, ECMP, HA link monitoring, and firewall session load balancing monitoring (or health checking) is not supported. Use TCP or UDP options for link monitoring instead.

IP Multicast

IPv4 and IPv6 Multicast traffic is only sent to the primary FPM module (usually the FPM in slot 3). This is controlled by the following configuration:

config load-balance flow-rule
  edit 18
    set status enable
    set vlan 0
    set ether-type ipv4
    set src-addr-ipv4 0.0.0.0 0.0.0.0
    set dst-addr-ipv4 224.0.0.0 240.0.0.0
    set protocol any
    set action forward
    set forward-slot master
    set priority 5
    set comment "ipv4 multicast"
  next
  edit 19
    set status enable
    set vlan 0
    set ether-type ipv6
    set src-addr-ipv6 ::/0
    set dst-addr-ipv6 ff00::/8
    set protocol any
    set action forward
    set forward-slot master
    set priority 5
    set comment "ipv6 multicast"

end

High Availability

Only the M1 and M2 interfaces are used for the HA heartbeat communication.

When using both M1 and M2 for the heartbeat, FortiGate-7000 v5.4.3 requires two switches: the first switch connects all M1 ports together and the second switch connects all M2 ports together. This is because the same VLAN is used for both M1 and M2 and the interface groups should remain in different broadcast domains.

Using a single switch for both M1 and M2 heartbeat traffic is possible if the switch supports Q-in-Q tunneling. In this case, use different VLANs for M1 traffic and M2 traffic to keep the two broadcast domains separate in the switch.

The following FortiOS HA features are not supported or are supported differently by FortiGate-7000 v5.4.3:

  • Remote IP monitoring (configured with the option pingserver-monitor-interface and related settings) is not supported.
  • Active-active HA is not supported.
  • The range for the HA group-id is 0 to 14.
  • Failover logic for FortiGate-7000 v5.4.3 HA is not the same as FGSP for other FortiGate clusters.
  • HA heartbeat configuration is specific to FortiGate-7000 systems and differs from standard HA.
  • FortiGate Session Life Support Protocol (FGSP) HA (also called standalone session synchronization) is not supported.

Shelf Manager Module

It is not possible to access SMM CLI using Telnet or SSH. Only console access is supported using the chassis front panel console ports as described in the FortiGate-7000 system guide.

For monitoring purposes, IPMI over IP is supported on SMM Ethernet ports. See your FortiGate-7000 system guide for details.

FortiOS features that are not supported by FortiGate-7000 v5.4.3

The following mainstream FortiOS 5.4.3 features are not supported by the FortiGate-7000 v5.4.3:

  • Hardware switch
  • Switch controller
  • WiFi controller
  • WAN load balancing (SD-WAN)
  • IPv4 over IPv6, IPv6 over IPv4, IPv6 over IPv6 features
  • GRE tunneling is only supported after creating a load balance flow rule, for example:

config load-balance flow-rule
  edit 0
    set status enable
    set vlan 0
    set ether-type ip
    set protocol gre
    set action forward
    set forward-slot master
    set priority 3
end

  • Hard disk features including WAN optimization, web caching, explicit proxy content caching, disk logging, and GUI-based packet sniffing.
  • Log messages should be sent only using the management aggregate interface.

IPsec VPN tunnels terminated by the FortiGate-7000

This section lists FortiGate-7000 limitations for IPsec VPN tunnels terminated by the FortiGate-7000:

  • Interface-based IPsec VPN is recommended.
  • Policy-based IPsec VPN is supported, but requires creating flow rules for each Phase 2 selector.
  • Dynamic routing and policy routing are not supported for IPsec interfaces.
  • Remote network subnets are limited to a /16 prefix.
  • IPsec static routes don't consider distance, weight, or priority settings. IPsec static routes are always installed in the routing table, regardless of the tunnel state.


  • IPsec tunnels are not load-balanced across the FPMs; all IPsec tunnel sessions are sent to the primary FPM module.
  • IPsec VPN dialup or dynamic tunnels require a flow rule that sends traffic destined for IPsec dialup IP pools to the primary FPM module.
  • In an HA configuration, IPsec SAs are not synchronized to the backup chassis. IPsec SAs are renegotiated after a failover.

More about IPsec VPN routing limitations

For IPv4 traffic, FortiGate-7000s only recognize netmasks of 16 to 32 bits. For example:

The following netmasks are supported:

  • 12.34.0.0/24
  • 12.34.0.0 255.255.0.0
  • 12.34.56.0/21
  • 12.34.56.0 255.255.248.0
  • 12.34.56.78/32
  • 12.34.56.78 255.255.255.255
  • 12.34.56.78 (for single IP addresses, FortiOS automatically uses 32-bit netmasks)

The following netmasks are not supported:

  • 12.34.0.0/15 (netmask is less than 16 bits)
  • 12.34.0.0 255.254.0.0 (netmask is less than 16 bits)
  • 12.34.56.1-12.34.56.100 (IP range is not supported)
  • 12.34.56.78 255.255.220.0 (invalid netmask)

SSL VPN

Sending all SSL VPN sessions to the primary FPM module is recommended. You can do this by:

  • Creating a flow rule that sends all sessions that use the SSL VPN destination port and IP address to the primary FPM module.
  • Creating flow rules that send all sessions that use the SSL VPN IP pool addresses to the primary FPM module.

Authentication

This section lists FortiGate-7000 authentication limitations:

  • Active authentication that requires a user to manually log into the FortiGate firewall can be problematic because the user may be prompted for credentials more than once as sessions are distributed to different FPM modules. You can avoid this by changing the load distribution method to src-ip.
  • FSSO is supported. Each FPM independently queries the server for user credentials.
  • RSSO is only supported after creating a load balance flow rule to broadcast RADIUS accounting messages to all FPM modules, as sketched below.
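A minimal sketch of such a broadcast rule, assuming RADIUS accounting arrives on the standard UDP port 1813, might look like the following; the rule ID and comment are placeholders:

config load-balance flow-rule
  edit 0
    set status enable
    set ether-type ipv4
    set protocol udp
    set dst-l4port 1813-1813
    set action forward
    set forward-slot all
    set comment "broadcast RADIUS accounting to all FPMs (example)"
  next
end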

Traffic shaping and DDoS policies

Each FPM module applies traffic shaping and DDoS quotas independently. Because of load-balancing, this may allow more traffic than expected.

Sniffer mode (one-arm sniffer)

One-arm sniffer mode is only supported after creating a load balance flow rule to direct sniffer traffic to a specific FPM module.

FortiGuard Web Filtering

All FortiGuard rating queries are sent through the management aggregate interface from the management VDOM (named dmgmt-vdom).

Log messages include a slot field

An additional “slot” field has been added to log messages to identify the FPM module that generated the log.

FortiOS Carrier

FortiOS Carrier is supported by the FortiGate-7000 v5.4.3 but GTP load balancing is not supported.

You have to apply a FortiOS Carrier license separately to each FIM and FPM module to license a FortiGate-7000 chassis for FortiOS Carrier.

Special notice for new deployment connectivity testing

Only the primary FPM module can successfully ping external IP addresses. During a new deployment, while performing connectivity testing from the FortiGate-7000, make sure to run execute ping tests from the primary FPM module CLI.

 

FortiGate-7000 Load balancing commands


FortiGate-7000 Load balancing commands

The most notable difference between a FortiGate-7000 and other FortiGates is the set of commands described in this section for configuring load balancing. The following commands are available:

config load-balance flow-rule
config load-balance setting

In most cases you do not have to use these commands. However, they are available to customize some aspects of load balancing.

config load-balance flow-rule

Use this command to add flow rules that create exceptions to how matched traffic is processed by a FortiGate-7000. Specifically, you can use these rules to match a type of traffic and control whether the traffic is forwarded or blocked. If the traffic is forwarded, you can specify whether to forward it to a specific FPM or to all FPMs. Unlike firewall policies, load-balance rules are not stateful, so for bi-directional traffic you may need to define two flow rules to match both traffic directions (forward and reverse), as in the sketch below.
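For example, the following is a minimal sketch of such a forward/reverse pair, pinning a hypothetical TCP service on port 8080 to the FPM module in slot 3; the port, slot, and comments are placeholders:

config load-balance flow-rule
  edit 0
    set status enable
    set ether-type ipv4
    set protocol tcp
    set dst-l4port 8080-8080
    set action forward
    set forward-slot FPM3
    set comment "example service, forward direction"
  next
  edit 0
    set status enable
    set ether-type ipv4
    set protocol tcp
    set src-l4port 8080-8080
    set action forward
    set forward-slot FPM3
    set comment "example service, reverse direction"
  next
end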

One common use of this command is to control how traffic that is not load balanced is handled. For example, use the following command to send all GRE traffic to the processor module in slot 4. In this example the GRE traffic is received by FortiGate-7000 front panel ports 1C1 and 1C5:

config load-balance flow-rule
  edit 0
    set src-interface 1c1 1c5
    set ether-type ip
    set protocol gre
    set action forward
    set forward-slot 4
end

The default configuration includes a number of flow rules that send traffic such as BGP traffic, DHCP traffic and so on to the primary worker. This is traffic that cannot be load balanced and is then just processed by the primary worker.
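As an illustration of what such a rule can look like (this is a sketch, not the exact factory default), a rule matching BGP traffic on TCP port 179 and sending it to the primary worker could be written as:

config load-balance flow-rule
  edit 0
    set status enable
    set ether-type ipv4
    set protocol tcp
    set dst-l4port 179-179
    set action forward
    set forward-slot master
    set comment "BGP to primary worker (example)"
  next
end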

Syntax

config load-balance flow-rule
  edit 0
    set status {disable | enable}
    set src-interface <interface-name> [<interface-name>...]
    set vlan <vlan-id>
    set ether-type {any | arp | ip | ipv4 | ipv6}
    set src-addr-ipv4 <ip-address> <netmask>
    set dst-addr-ipv4 <ip-address> <netmask>
    set src-addr-ipv6 <ip-address> <netmask>
    set dst-addr-ipv6 <ip-address> <netmask>
    set protocol {any | icmp | tcp | udp | igmp | sctp | gre | esp | ah | ospf | pim | vrrp}
    set src-l4port <start>[-<end>]
    set dst-l4port <start>[-<end>]
    set action {forward | mirror-ingress | mirror-egress | stats | drop}
    set mirror-interface <interface-name>
    set forward-slot {master | all | load-balance | FPM3 | FPM4 | FPM5 | FPM6}
    set priority <number>
    set comment <text>
end

status {disable | enable}

Enable or disable this flow rule. The default for a new flow rule is disable.

src-interface <interface-name> [<interface-name>...]

The names of one or more FIM module front panel interfaces accepting the traffic subject to the flow rule.

vlan <vlan-id>

If the traffic matching the rule is VLAN traffic, enter the VLAN ID used by the traffic.

ether-type {any | arp | ip | ipv4 | ipv6}

The type of traffic to be matched by the rule. You can match any traffic (the default) or match only ARP, IP, IPv4, or IPv6 traffic.

{src-addr-ipv4 | dst-addr-ipv4 | src-addr-ipv6 | dst-addr-ipv6} <ip-address> <netmask>

The source and destination address of the traffic to be matched. The default of 0.0.0.0 0.0.0.0 matches all traffic.

protocol {any | icmp | tcp | udp | igmp | sctp | gre | esp | ah | ospf | pim | vrrp}

If ether-type is set to ip, ipv4, or ipv6, specify the protocol of the IP, IPv4, or IPv6 traffic to match the rule. The default is any.

{src-l4port | dst-l4port} <start>[-<end>]

Specify a source port range and a destination port range. This option appears for some protocol settings, for example, if protocol is set to tcp or udp. The default range is 0-0.

action {forward | mirror-ingress | mirror-egress | stats | drop}

How to handle matching packets. They can be dropped, forwarded to another destination, or you can record statistics about the traffic for later analysis. You can combine two or three settings in one command; for example, you can set action to both forward and stats to forward traffic and collect statistics about it. Use append to add multiple options.

The default action is forward.

The mirror-ingress option copies (mirrors) all ingress packets that match this flow rule and sends them to the interface specified with the mirror-interface option.


The mirror-egress option copies (mirrors) all egress packets that match this flow rule and sends them to the interface specified with the mirror-interface option.

mirror-interface <interface-name>

The name of the interface to send packets matched by this flow rule when action is set to mirror-ingress or mirror-egress.

forward-slot {master | all | load-balance | FPM3 | FPM4 | FPM5 | FPM6}

The worker that you want to forward matching traffic to. master forwards the traffic to the worker that is operating as the primary worker (usually the FPM module in slot 3). all forwards the traffic to all workers. load-balance means use the default load balancing configuration to handle this traffic. FPM3, FPM4, FPM5, and FPM6 allow you to forward the matching traffic to a specific FPM module: FPM3 is the FPM module in slot 3, FPM4 is the FPM module in slot 4, and so on.

priority <number>

Set the priority of the flow rule in the range 1 (highest priority) to 10 (lowest priority). Higher priority rules are matched first. You can use the priority to control which rule is matched first if you have overlapping rules.

comment <text>

Optionally add a comment that describes the rule.

config load-balance setting

Use this command to set a wide range of load balancing settings.

config load-balance setting
  set gtp-load-balance {disable | enable}
  set max-miss-heartbeats <heartbeats>
  set max-miss-mgmt-heartbeats <heartbeats>
  set weighted-load-balance {disable | enable}
  set dp-load-distribution-method {round-robin | src-ip | dst-ip | src-dst-ip | src-ip-sport | dst-ip-dport | src-dst-ip-sport-dport}
  config workers
    edit 3
      set status enable
      set weight 5
  end
end

gtp-load-balance {disable | enable}

Enable GTP load balancing for FortiGate-7000 configurations licensed for FortiOS Carrier.


max-miss-heartbeats <heartbeats>

Set the number of missed heartbeats before a worker is considered to have failed. If this many heartbeats are not received from a worker, the worker is considered unable to process data traffic and no more traffic will be sent to it.

The time between heartbeats is 0.2 seconds. The range is 3 to 300: 3 means 0.6 seconds, 10 (the default) means 2 seconds, and 300 means 60 seconds.

max-miss-mgmt-heartbeats <heartbeats>

Set the number of missed management heartbeats before a worker is considered to have failed. If a management heartbeat fails, there is a communication problem between that worker and the other workers. This communication problem means the worker may not be able to synchronize configuration changes, sessions, the kernel routing table, the bridge table, and so on with the other workers. If a management heartbeat failure occurs, no traffic will be sent to the worker.

The time between management heartbeats is 1 second. The range is 3 to 300 seconds. The default is 20 seconds.

weighted-load-balance {disable | enable}

Enable weighted load balancing depending on the slot weight. Use the config workers command to set the weight for each slot, as in the sketch below.
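For example, the following sketch enables weighted load balancing and gives a hypothetical FPM in slot 3 twice the weight of one in slot 4; the slots and weights are placeholders:

config load-balance setting
  set weighted-load-balance enable
  config workers
    edit 3
      set status enable
      set weight 10
    next
    edit 4
      set status enable
      set weight 5
    next
  end
end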

dp-load-distribution-method {round-robin | src-ip | dst-ip | src-dst-ip | src-ip-sport | dst-ip-dport | src-dst-ip-sport-dport}

Set the method used to distribute sessions among workers. Usually you would only need to change the method if you have specific requirements or you find that the default method isn't distributing sessions in the manner that you would prefer. The default is src-dst-ip-sport-dport, which means sessions are identified by their source address and port and destination address and port.

  • round-robin: directs new requests to the next slot regardless of response time or number of connections.
  • src-ip: traffic load is distributed across all slots according to source IP address.
  • dst-ip: traffic load is statically distributed across all slots according to destination IP address.
  • src-dst-ip: traffic load is distributed across all slots according to the source and destination IP addresses.
  • src-ip-sport: traffic load is distributed across all slots according to the source IP address and source port.
  • dst-ip-dport: traffic load is distributed across all slots according to the destination IP address and destination port.
  • src-dst-ip-sport-dport: traffic load is distributed across all slots according to the source and destination IP address, source port, and destination port. This is the default load balance schedule and represents true session-aware load balancing.
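For example, to keep all sessions from the same source IP on one FPM (which can reduce repeated login prompts when active authentication is used), a minimal sketch:

config load-balance setting
  set dp-load-distribution-method src-ip
end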

config workers

Set the weight and enable or disable each worker. Use the edit command to specify the slot the worker is installed in.


The weight range is 1 to 10. 5 is average, 1 is -80% of average, and 10 is +100% of average. The weights take effect if weighted-load-balance is enabled.

config workers
  edit 3
    set status enable
    set weight 5
end

FPM-7620E processing module


FPM-7620E processing module

The FPM-7620E processing module is a high-performance worker module that processes sessions load balanced to it by FortiGate-7000 series interface (FIM) modules over the chassis fabric backplane. The FPM-7620E can be installed in any FortiGate-7000 series chassis in slots 3 and up.

The FPM-7620E includes two 80Gbps connections to the chassis fabric backplane and two 1Gbps connections to the base backplane. The FPM-7620E processes sessions using a dual CPU configuration, accelerates network traffic processing with 4 NP6 processors and accelerates content processing with 8 CP9 processors. The NP6 network processors are connected by the FIM switch fabric so all supported traffic types can be fast path accelerated by the NP6 processors.

The FPM-7620E includes the following hardware features:

  • Two 80Gbps fabric backplane channels for load balanced sessions from the FIM modules installed in the chassis.
  • Two 1Gbps base backplane channels for management, heartbeat, and session sync communication.
  • Dual CPUs for high performance operation.
  • Four NP6 processors to offload network processing from the CPUs.
  • Eight CP9 processors to offload content processing and SSL and IPsec encryption from the CPUs.
  • Power button.
  • NMI switch (for troubleshooting as recommended by Fortinet Support).
  • Mounting hardware.
  • LED status indicators.

FPM-7620E front panel



Physical Description

Dimensions 1.2 x 11.34 x 14 in. (3.1 x 28.8 x 35.1 cm) (Height x Width x Depth)
Weight 7.2 lb. (3.23 kg)
Operating Temperature 32 to 104°F (0 to 40°C)
Storage Temperature -31 to 158°F (-35 to 70°C)
Relative Humidity 10% to 90% non-condensing

Front Panel LEDs

LED      State           Description
STATUS   Off             The FPM-7620E is powered off.
         Green           The FPM-7620E is powered on and operating normally.
         Flashing Green  The FPM-7620E is starting up.
ALARM    Red             Major alarm.
         Amber           Minor alarm.
         Off             No alarms.
POWER    Green           The FPM-7620E is powered on and operating normally.
         Off             The FPM-7620E is powered off.

Turning the module on and off

You can use the front panel power button to turn the module power on or off. If the module is powered on, press the power switch to turn it off. If the module is turned off and installed in a chassis slot, press the power button to turn it on.


NMI switch

When working with Fortinet Support to troubleshoot problems with the FPM-7620E, you can use the front panel non-maskable interrupt (NMI) switch to assist with troubleshooting. Pressing this switch causes the software to dump registers/backtraces to the console. After the data is dumped, the board reboots. While the board is rebooting, traffic is temporarily blocked. The board should restart normally and traffic can resume once it's up and running.

NP6 network processors – offloading load balancing and network traffic

The four FPM-7620E NP6 network processors combined with the FIM module integrated switch fabric (ISF) provide hardware acceleration by offloading load balancing from the FPM-7620E CPUs. The result is enhanced network performance from the NP6 processors, plus the network processing load is removed from the CPUs. The NP6 processors can also handle some CPU-intensive tasks, like IPsec VPN encryption/decryption. Because of the integrated switch fabric, all sessions are fast-pathed and accelerated.



Accelerated IPS, SSL VPN, and IPsec VPN (CP9 content processors)

The FPM-7620E includes eight CP9 processors that provide the following performance enhancements:

  • Flow-based inspection (IPS, application control, etc.) pattern matching acceleration with over 10Gbps throughput
  • IPS pre-scan
  • IPS signature correlation
  • Full match processors
  • High performance VPN bulk data engine
  • IPsec and SSL/TLS protocol processor
  • DES/3DES/AES128/192/256 in accordance with FIPS46-3/FIPS81/FIPS197
  • MD5/SHA-1/SHA256/384/512-96/128/192/256 with RFC1321 and FIPS180
  • HMAC in accordance with RFC2104/2403/2404 and FIPS198
  • ESN mode
  • GCM support for NSA "Suite B" (RFC6379/RFC6460) including GCM-128/256 and GMAC-128/256
  • Key Exchange Processor that supports high performance IKE and RSA computation
  • Public key exponentiation engine with hardware CRT support
  • Primality checking for RSA key generation
  • Handshake accelerator with automatic key material generation
  • True Random Number generator
  • Elliptic Curve support for NSA "Suite B"
  • Sub public key engine (PKCE) to support up to 4096 bit operation directly (4k for DH and 8k for RSA with CRT)
  • DLP fingerprint support
  • TTTD (Two-Thresholds-Two-Divisors) content chunking
  • Two thresholds and two divisors are configurable

 


Hardware installation

This chapter describes installing a FPM-7620E processing module into a FortiGate-7000 chassis.

FPM-7620E mounting components

To install a FPM-7620E, you slide the module into slot 3 or higher in the front of a FortiGate-7000 series chassis and then use the mounting components to lock the module into place in the slot. When locked into place and positioned correctly, the module front panel is flush with the chassis front panel and the module is connected to the chassis backplane.

To position the module correctly you must use the mounting components, shown below for the right side of the FPM-7620E front panel. The mounting components on the left of the front panel are the same but reversed. The FPM-7620E mounting components align the module in the chassis slot and are used to insert and eject the module from the slot.

(Figure: the mounting latch shown open and closed; when open, the latch slides up about 2 mm.)

The FPM-7620E handles align the module in the chassis slot and are used to insert and eject the module from the slot. The latches activate micro switches that turn on or turn off power to the module. When both latches are raised the module cannot receive power. When the latches are fully closed if the module is fully inserted into a chassis slot the module can receive power.

Inserting a FPM-7620E module into a chassis

This section describes how to install a FPM-7620E module into a FortiGate-7000 series chassis slot 3 or up.

You must carefully slide the module all the way into the chassis slot, close the handles to seat the module into the slot, and tighten the retention screws to make sure the module is fully engaged with the backplane and secured. You must also make sure that the sliding latches are fully closed by gently pushing them down. The handles must be closed, the retention screws tightened and the latches fully closed for the module to get power and start up. If the module is not receiving power all LEDs remain off.

FPM-7620Es are hot swappable. The procedure for inserting a FPM-7620E into a chassis slot is the same whether or not the chassis is powered on.

To insert a FPM-7620E into a chassis slot

Do not carry the FPM-7620E by holding the handles or retention screws. When inserting or removing the FPM-7620E from a chassis slot, handle the module by the front panel. The handles are not designed for carrying the board. If the handles become bent or damaged the FPM-7620E may not align correctly in the chassis slot.

To complete this procedure, you need:

  • A FPM-7620E
  • A FortiGate-7000 chassis with an empty FPM slot (slot 3 or higher)
  • An electrostatic discharge (ESD) preventive wrist strap with connection cord

FPM-7620Es must be protected from static discharge and physical shock. Only handle or work with FPM-7620Es at a static-free workstation. Always wear a grounded electrostatic discharge (ESD) preventive wrist strap when handling FPM-7620Es. Attach the ESD wrist strap to your wrist and to an ESD socket or to a bare metal surface on the chassis or frame. (An ESD wrist strap is not visible in the photographs below because they were taken in an ESD safe lab environment.)


  1. Remove the FPM-7620E module from its packaging. Align the module with the chassis slot and slide the module part way into the slot.

In the photograph the FPM-7620E is being installed into chassis slot 4 of a FortiGate-7040E chassis.

  2. Unlock the left and right handles by pushing the handle latches up about 2 mm until the handles pop open.

Fully open both handles before sliding the module into the chassis to avoid damaging the handle mechanism.

Damaging the handles may prevent the module from connecting to power.

  3. Carefully slide the module into the slot until the handles engage with the sides of the chassis slot, partially closing them.

Insert the module by applying moderate force to the front faceplate (not the handles) to slide the module into the slot. The module should glide smoothly into the chassis slot. If you encounter any resistance while sliding the module in, the module could be aligned incorrectly. Pull the module back out and try inserting it again.


  4. Push both handles closed and close the latches.

Closing the handles draws the module into place in the chassis slot and into full contact with the chassis backplane. The module front panel should be in contact with the chassis front panel and the latches should drop down and lock into place. You should gently push the latches down to make sure they lock. The module will not receive power until the latches are fully locked.

  5. Tighten both retention screws to secure the module in the chassis.

You can tighten the retention screws by hand with a Phillips screwdriver. If you use a power screwdriver, the tightening torque needs to be adjusted to between 3 in-lb and 4 in-lb (0.4 N-m to 0.48 N-m).

As the latches are locked, power is supplied to the module. If the chassis is powered on during insertion the status LED flashes green as the module starts up. Once the board has started up and is operating correctly, the front panel LEDs are lit for normal operation.

Normal LED operation

LED   State
Status   Green
Alarm   Off
Power   Green


Shutting down and removing a FPM-7620E board from a chassis

To avoid potential hardware problems, always shut down the FPM-7620E operating system properly before removing the FPM-7620E from a chassis slot or before powering down the chassis.

Disconnect all cables from the FPM-7620E module, including all network cables and USB cables or keys.

FPM-7620Es are hot swappable. The procedure for removing a FPM-7620E from a chassis slot is the same whether or not the chassis is powered on.

To remove a FPM-7620E board from a chassis slot

Do not carry the FPM-7620E by holding the handles or retention screws. When inserting or removing the FPM-7620E from a chassis slot, handle the module by the front panel. The handles are not designed for carrying the board. If the handles become bent or damaged, the FPM-7620E may not align correctly in the chassis slot.

To complete this procedure, you need:

  • A FortiGate-7000 chassis with a FPM-7620E module installed
  • An electrostatic discharge (ESD) preventive wrist strap with connection cord

FPM-7620Es must be protected from static discharge and physical shock. Only handle or work with FPM-7620Es at a static-free workstation. Always wear a grounded electrostatic discharge (ESD) preventive wrist strap when handling FPM-7620Es. (An ESD wrist strap is not visible in the photographs below because they were taken in an ESD safe lab environment.)

 


  1. Fully loosen the retention screws.

You must fully loosen the screws or the handles may be damaged when used to eject the board from the chassis slot.

  2. Unlock the left and right handles by pushing the latches up about 2 mm until the handles pop open.


  3. Fully open the handles to eject the module from the chassis.

You need to open the handles with moderate force to eject the module from the chassis.

  4. Hold the module front panel sides and slide it part way out of the slot. Then grasp the module by the sides and carefully slide it out of the slot.


Troubleshooting

This section describes some common troubleshooting topics:

FPM-7620E does not startup

Incorrect positioning of the FPM-7620E handles, among other causes, may prevent a FPM-7620E from starting up correctly.

Latches and handles not fully closed

If the latches or handles are damaged or positioned incorrectly, the FPM-7620E may not start up. Make sure the latches are fully closed, the handles are correctly aligned, fully inserted, and locked, and the retention screws are tightened.

Firmware problem

If the FPM-7620E is receiving power, the latches and handles are fully closed, and you have restarted the chassis but the FPM-7620E still does not start up, the problem could be with FortiOS. Connect to the FPM-7620E console and try cycling the power to the board. If the BIOS starts up, interrupt the BIOS startup and install a new firmware image.

If this does not solve the problem, contact Fortinet Technical Support.

FPM-7620E status LED is flashing during system operation

The FPM-7620E Status LED is normally off while the module is operating. If this LED starts flashing while the module is operating, a fault condition may exist. At the same time, the FPM-7620E may stop processing traffic.

To resolve the problem, you can try removing and reinserting the FPM-7620E in the chassis slot. Reloading the firmware may also help.

If this does not solve the problem there may have been a hardware failure or other problem. Contact Fortinet Technical Support for assistance.

What’s New in AV Engine 5.355


What’s New in AV Engine 5.355

New features

  • Support for opening ACE, ISO, and CRX compression formats.
  • New Content Disarm and Reconstruction (CDR) feature.
  • Script checksum support for HTML files.
  • Support for hidden zlib files in Object Linking and Embedding (OLE) content.
  • New scan timeout control framework.

Enhancements

  • Content Pattern Recognition Language (CPRL) signature runtime performance improvements.
  • Win32 emulator optimization.
  • APK and ZIP decompression optimization.
  • Accelerated checksum calculation.
  • File typing supports more file types, including Dotnet, CHM, Mach-O, DMG and XAR, and RTF.
  • Script file typing improvements.


Product Integration and Support

Fortinet Product Support

The following table lists AV engine product integration and support information:

Product     Supported versions
FortiOS     5.4.0 and later; 5.6.0 and later
FortiAP-S   5.4.0 and later; 5.6.0 and later


Resolved Issues

The resolved issues listed below do not include every bug that has been corrected with this release. For inquiries about a particular bug, please contact Customer Service & Support.

AV engine

Bug ID Description
453487 Added support for gzip files with the flag's reserved bits set.
453982 Applied more signatures on RTF files.
413069 Fixed a crash in the JS emulator.
421545 Fixed a signature loading failure bug on FortiOS SOC3 platforms.
       Fixed potential memory issues found by fuzzing in GZIP, CAB, and HTML parsing.
413625 Fixed a Win32 emulator performance degradation bug.
       Fixed memory leaks and overflows in pyarch, sis, and rar decompression.
       Fixed potential memory bugs in autoit, arj, and aspack decompression.
440519 Flag UPX as an archive bomb if the decompressed size is 100 times greater than the original file size.
       Fixed an AV engine x86_64 crash on Windows 10 build 1703.

FortiOS

Bug ID Description
467820 Fixed missing file names for RAR v5.0.
458192 MSI and KGB file types are now on the list to be sent to FortiSandbox as potentially suspicious files.

FortiOS IPS Engine version 3.443


Introduction

This document provides the following information for FortiOS IPS Engine version 3.443:

  • What's New in IPS Engine 3.443
  • Product Integration and Support
  • Resolved Issues

For additional FortiOS documentation, see the Fortinet Document Library.

What’s New in IPS Engine 3.443

Bug ID Description
443479 Support for FortiSandbox Sniffer user defined file extensions.

Product Integration and Support

Fortinet Product Support

The following table lists IPS engine product integration and support information:

Product      Supported versions
FortiOS      5.2.0 and later; 5.4.0 and later; 5.6.0 and later
FortiClient  5.4.0 and later (Windows and Mac); 5.6.0 and later (Windows and Mac)

 

 

Resolved Issues

The resolved issues listed below do not include every bug that has been corrected with this release. For inquiries about a particular bug, please contact Customer Service & Support.

Bug ID Description
446858 Fixed a crash caused by a NULL pointer de-reference.
445900, 446782 Fixed two SSL deep inspection bugs.
444268 Fixed IPS engine high CPU usage caused by TCP RST packets with data.
444811 Fixed a crash in the IPS HTTP decoder on some proxy traffic. Fixed the IPS_CONTEXT_URI_DECODED context field_start and field_end values for proxy traffic.
440277 Fixed a random detection miss, and a random crash in SSL packet scanning.
411415 Support session clearing by VDOM.
379449 Updated the Brotli library to match the version used by Chromium 61.
450442 Fixed crashes caused by configuration errors in IPS sensors.
444237 Fixed two bugs in the SMB2 decoder that may cause high memory usage.
403562 Fixed a bug that could cause FortiOS to enter conserve mode because of memory corruption.
451677 Fixed a bug that caused the IPS engine to incorrectly identify Phoenix PACS traffic as BitTorrent traffic.
451763 Fixed a bug that caused the IPS engine to drop STUN packets because they were identified as partial SSL records.
460391 Fixed crashes in the update_ftp_scan_ret function.
448646 Fixed high CPU usage caused by retransmission bugs.
450693, 460635 Fixed a bug that caused the ERR_SSL_DECRYPT_ERROR_ALERT message when SSL deep scanning is enabled.

FortiWLC 8.4.0 Release Notes


Getting Started with Upgrade

The following table describes the approved upgrade paths applicable to all controllers except the new virtual controllers.

 

NOTE:

FortiWLC-1000D and FortiWLC-3000D controllers can be upgraded only from 8.3 releases.

Supported Upgrade Releases

 

From FortiWLC release   To FortiWLC release
7.0 7.0-10-0
8.0 8.0-5-0, 8.0-6-0
8.1 8.1-3-2
8.2 8.2.4
8.2.4/8.3 8.3.1
7.0.11, 8.2.7, 8.3.0, 8.3.1, and 8.3.2 8.3.3
7.0-11, 8.2.7, 8.3.0, 8.3.1, 8.3.2, 8.3.3 8.4.0

 

NOTE:

  • Fortinet recommends that while upgrading 32-bit controllers to version 8.4.0, you use the upgrade controller command instead of the upgrade system command.
  • Controller upgrades performed via the CLI require a serial or SSH2 connection to the controller. FortiWLC-1000D and FortiWLC-3000D controller upgrades can be performed via the GUI as well.

 

Check Available Free Space

Total free space required is the size of the image + 50MB (approximately 230 MB).  You can use the show file systems command to verify the current disk usage.

 

controller# show file systems

Filesystem     1K-blocks   Used     Available   Use%   Mounted on
/dev/hdc2      428972      227844   178242      57%    /
none           4880        56       4824        2%     /dev/shm

 

The first partition in the above example, /dev/hdc2 (the actual name will vary depending on the version of FortiWLC-SD installed on the controller), is the one that must have ample free space.

 

In the example above, the partition shows 178242KB of free space, which translates to approximately 178MB. If your system does not have at least 230MB (230000KB) free, use the delete flash:<flash> command to free up space by deleting older flash files until there is enough space to perform the upgrade (on some controllers, this may require deleting the flash file for the current running version).
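For example, assuming a hypothetical older image file named meru-8.2-4-MC4200-rpm.tar is still on the flash, the cleanup could look like this; the file name is a placeholder, so check your own flash contents first:

controller# delete flash:meru-8.2-4-MC4200-rpm.tar
controller# show file systems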

 

Set up Serial Connection

Set the serial connection for the following options:

 

 

NOTE:

Only one terminal session is supported at a time. Making multiple serial connections causes signalling conflicts, resulting in damage or loss of data.

 

  • Baud–115200
  • Data–8 bits
  • Parity–None
  • Stop Bit—1
  • Flow Control—None

 

Supported Hardware and Software

This table lists the supported hardware and software versions in this release of FortiWLC.

 

Hardware/Software     Supported                                                 Unsupported

Access Points         AP122, AP822e, AP822i (v1 & v2), AP832e, AP832i,          AP201, AP208, AP150, AP300, AP301, AP302,
                      OAP832e, AP332e*, AP332i*, AP433e*, AP433i*,              AP302i, AP301i, AP310, AP311, AP320, AP310i,
                      OAP433e*, FAP-U421EV, FAP-U423EV, FAP-U321EV,             AP320i, OAP180, OAP380
                      FAP-U323EV, FAP-U422EV, FAP-U221EV, FAP-U223EV,
                      FAP-U24JEV, AP1010e*, AP1010i*, AP1020e*, AP1020i*,
                      AP1014i*, AP110*

                      *Cannot be configured as a relay AP

Controllers           FortiWLC-50D, FortiWLC-200D, FortiWLC-500D,               MC5000, MC4100, MC1500, MC1500-VE
                      FortiWLC-1000D, FortiWLC-3000D#, FWC-VM-50#,
                      FWC-VM-200#, FWC-VM-500#, FWC-VM-1000#, FWC-VM-3000#,
                      MC3200, MC3200-VE, MC1550, MC1550-VE, MC6000,
                      MC4200 (with or without 10G Module), MC4200-VE

                      #Spectrum Manager NOT supported in these controller models.

FortiWLM              8.3.3/8.4

FortiConnect          16.8.2

Browsers
FortiWLC (SD) WebUI   Internet Explorer 9 and 10, Mozilla Firefox 25+, Google Chrome 31+
                      NOTE: A limitation of Firefox 3.0 and 3.5+ prevents the display of the X-axis legend of dashboard graphs.
Captive Portal        Internet Explorer 6, 7, 8, 9, 10, 11, and Edge; Apple Safari; Google Chrome; Mozilla Firefox 4.x and earlier;
                      mobile devices (such as Apple iPhone and BlackBerry)

 

 

 

Installing and Upgrading

Follow this procedure to upgrade FortiWLC-50D, FortiWLC-200D, FortiWLC-500D, MC1550, MC1550-VE, MC3200, MC3200-VE, MC4200, MC4200-VE and MC6000 controllers. See section Upgrading FortiWLC-1000D and FortiWLC-3000D to upgrade FortiWLC-1000D and FortiWLC-3000D. See Upgrading Virtual Controllers to upgrade virtual controllers.

 

 

  1. Download image files from the remote server to the controller using one of the following commands:

# copy ftp://<ftpuser>:<password>@<ext-ip-addr>/<image-name>-rpm.tar<space>.

 

[OR]  

 

# copy tftp://<ext-ip-addr>/<image-name>-rpm.tar<space>.

 

Where

 

  • image-name for legacy controllers: meru-{release-version}-{hardware-model}-rpm.tar. For example, meru-8.3-3-MC4200-rpm.tar
  • image-name for FortiWLC: forti-{release-version}-{hardware-model}-rpm.tar. For example, forti-8.3-3-FWC2HD-rpm.tar

 

  2. Disable AP auto upgrade and then upgrade the controller (in config mode):

# auto-ap-upgrade disable

 

# copy running-config startup-config

 

# upgrade controller <target version> (Example, upgrade controller 8.3)

 

The show flash command displays the version details.

 

  3. Upgrade the APs:

# upgrade ap same all

 

After the APs are up, use the show controller and show ap commands to ensure that the controller and APs are upgraded to the latest (upgraded) version. Ensure that the system configuration is available in the controller using the show running-config command (if not, recover it from the remote location). See the Backup Running Configuration step.

 

Upgrading FortiWLC-1000D and FortiWLC-3000D

To upgrade FortiWLC-1000D and FortiWLC-3000D controllers, use the following instructions.

 

In version 8.4.0, the image naming system has been changed for 64-bit controller models from Primary/Secondary to image0/image1. This change applies to the upgrade procedure in the related FortiWLC GUI screens and CLI commands.

 

Upgrading via CLI

  1. Use the show images command to view the available images in the controller. By default, a new controller boots from the primary partition, which contains the running image.

default(15)# show images

Running image: Primary   <-- denotes the primary partition

--------------------------------------------------------------------------------
Running image details.

         System version: 0.3.2

         System hash: 11af7a3f3a700d3c8335dc254165282a91bd021b

         System branch: master

         System built: 20170323191620

         System memory: 721M/1006M

         Apps version: 8.3-1build-15

         Apps size: 1204M/1822M

--------------------------------------------------------------------------------
Other image details.

         System version: 0.3.3

         System hash: 4699cb9f517c4a2abbbce458f689bf3558b5d65e

         System branch: master

         System built: 20170511180827

         System memory: 729M/1015M

         Apps version: 8.3-1build-21

         Apps size: 1119M/1821M

 

  2. To install the latest release, download the release image using the upgrade-image command:

 

upgrade-image scp://<username>@<remote-server-ip>:<path-to-image>/<image-name>-rpm.tar both reboot

 

The above command upgrades the secondary partition, and the controller reboots to the secondary partition.

 

NOTE:

After an upgrade, the current partition shifts to the other partition. For example, if you started the upgrade in the primary partition, post upgrade the default partition becomes the secondary partition, and vice versa.

 

default(15)# show images

Running image: Secondary  <-- current partition after upgrade

--------------------------------------------------------------------------------

Running image details.

         System version: 0.3.2

         System hash: 11af7a3f3a700d3c8335dc254165282a91bd021b

         System branch: master

         System built: 20170323191620

         System memory: 729M/1015M

         Apps version: 8.3-1build-20

         Apps size: 1116M/1821M

--------------------------------------------------------------------------------

Other image details.

         System version: 0.3.2

         System hash: 11af7a3f3a700d3c8335dc254165282a91bd021b

         System branch: master

         System built: 20170323191620

         System memory: 721M/1006M

         Apps version: 8.3-1build-15

         Apps size: 1204M/1822M

Upgrading via GUI

This section describes the upgrade procedure through the FortiWLC GUI.

 

NOTE:

  • Standalone controllers running pre-8.3.3 FortiWLC (except version 0-12) are required to upgrade to 8.3.3 GA and then to the current 8.4.0 version. Fortinet recommends upgrading via the CLI to avoid this issue, which occurs due to a file size limitation.
  • This issue does not exist on controllers with manufacturing build as 8.3.3 GA.

 

  1. To upgrade controllers using GUI, navigate to Maintenance > File Management > SD Version.
  2. Click the Import button to choose the image file.

 

NOTE:

FortiWLC release 8.4.0 introduces software upgrades using the .fwlc format. This format will be supported in the forthcoming releases.

Direct upgrade from a pre-8.4.0 to 8.4.0 release using the .fwlc format is not supported.

 

 

  3. After the import is complete, a success message is displayed.

 

 

Switching Partitions

To switch partitions in FortiWLC-1000D, FortiWLC-3000D and the new virtual controllers, select the partition during the bootup process.

 

Upgrading 32-bit 8.3.3 Controllers (MC models, FortiWLC50D/200D/500D) with AP832/822 (without KRACK patch)

Upgrading from FortiWLC 8.3.3 to 8.4.0 results in runtime1 image corruption on AP832 and AP822v1 APs. This is due to a resource leak in version 8.3.3 that is fixed in later releases.

 

Follow these steps to upgrade from 8.3.3 to 8.4.0.

  1. Reboot the APs before upgrade.
  2. Run the upgrade controller command to upgrade controllers.
  3. Once the controller is online, upgrade the APs in batches. Before initiating the upgrade, ensure all APs are rebooted so that their uptime is less than 5 hours.

 

NOTE:

Fortinet recommends that you upgrade the 8.3.3 32-bit controller before upgrading the access points due to the issue mentioned in this section.

If the KRACK patch is installed on the 8.3.3 32-bit controller, this recommendation does not apply; the controller can be directly upgraded to 8.4.

Upgrading a N+1 Site

To upgrade a site running N+1, all controllers must be on the same FortiWLC-SD version and the backup controller must be in the same subnet as the primary controllers.

 

NOTE:

  • 64-bit controllers running pre-8.3.3 FortiWLC (except version 0-12) are required to upgrade to the 8.3.3 GA version and then to the current 8.4.0 version.
  • When upgraded to 8.3.3 GA, the N+1 setup needs to be reconfigured to enable N+1; that is, the master controller should be deleted and then added back to the slave controller.

This reconfiguration is not required when upgrading from 8.3.3 GA to 8.4.0.

  • This issue does not exist on controllers with manufacturing build as 8.3.3 GA.

 

You can choose any of the following options to upgrade:

  • Option 1 – Upgrade the N+1 controllers just as you would upgrade any controller.
    1. Upgrade the master and then upgrade the slave.
    2. After the upgrade, enable master on the slave using the nplus1 enable command.

 

  • Option 2 – Upgrade the slave and then upgrade the master.

After the upgrade, enable the master service on the slave using the nplus1 enable command.

 

  • Option 3 – If there are multiple master controllers:
    1. Upgrade all master controllers followed by the slave controllers. After the upgrade, enable all master controllers on the slave controllers using the nplus1 enable command.
    2. To enable a master controller on a slave controller, use the nplus1 enable command.
    3. Connect to all controllers using SSH or a serial cable.
    4. Use the show nplus1 command to verify the slave and master controller status.

 

The output should display the following information:

Admin: Enable 

Switch: Yes 

Reason: ‐

SW Version: 8.3-1

 

  5. If the configuration does not display the above settings, use the nplus1 enable <master-controller-ip> command to complete the configuration.
  6. To add any missing master controller to the cluster, use the nplus1 add master command.

 

Restore Saved Configuration

After upgrading, restore the saved configuration.

  1. Copy the backup configuration back to the controller:

# copy ftp://<user>:<passwd>@<offbox-ip-address>/running-config.txt orig-config.txt

  2. Copy the saved configuration file to the running configuration file:

# copy orig-config.txt running-config

  3. Save the running configuration to the start-up configuration:

# copy running-config startup-config

 

Upgrading Virtual Controllers

Virtual Controllers can be upgraded the same way as the hardware controllers. See sections Upgrading via CLI, Upgrading via GUI, and Upgrading a N+1 Site.

Download the appropriate Virtual Controller image from the Fortinet Customer Support website. For more information on managing virtual controllers, see the Virtual Wireless Controller Deployment Guide.

Upgrading the controller can be done in the following ways:

  • Using the FTP, TFTP, SCP, or SFTP protocols.
  • Navigating to Maintenance > File Management in the FortiWLC GUI to import the downloaded package.

The following are sample commands for upgrading the Virtual Controllers using any of these protocols.

  • upgrade-image tftp://10.xx.xx.xx:forti-x.x-xbuild-x-x86_64-rpm.tar both reboot
  • upgrade-image sftp://build@10.xx.xxx.xxx:/home/forti-x.x-xGAbuild-88-FWC1KD-rpm.tar both reboot
  • upgrade-image scp://build@10.xx.xxx.xxx:/home/forti-x.x-xGAbuild-88-FWC1KD-rpm.tar both reboot
  • upgrade-image ftp://anonymous@10.xx.xx.xx:forti-x.x-xbuild-x-x86_64-rpm.tar both reboot

 

The both option upgrades the Fortinet binaries (rpm) as well as the kernel (iso); the apps option upgrades only the Fortinet binaries (rpm).

After the upgrade, the Virtual Controller should maintain the system-id of the system, unless there were changes in the fields that are used to generate the system-id. See the Licensing section for detailed information.

The International Virtual Controller can be installed, configured, licensed and upgraded the same way.

 

Upgrade Advisories

The following are upgrade advisories to consider before you begin upgrading your network.

NOTE:

Fortinet recommends upgrading in batches of no more than 100 APs.

Upgrading Virtual Controllers

In the upgrade command, select the options Apps or Both based on these requirements:

  • Apps: This option will only upgrade the Fortinet binaries (rpm).
  • Both: This option will upgrade Fortinet binaries as well as kernel (iso).

Upgrading FAP-U422EV

If the controller is running on pre-8.4.0 version and FAP-U422EV is deployed, follow these points:

  • Disable auto-ap-upgrade.

OR

  • It is advised not to plug in the FAP-U422EV until the controller is upgraded to 8.4.0.

Mesh Deployments

When attempting to upgrade a mesh deployment, you must upgrade the mesh APs individually, starting with the outermost APs and working inward toward the gateway APs, before upgrading the controller.

Feature Groups in Mesh profile

If APs that are part of a mesh profile are to be added to a feature group, all APs of that mesh profile should be added to the same feature group. The Override Group Settings option in the Wireless Interface section of the Configuration > Wireless > Radio page must be enabled on the gateway AP.

Voice Scale Recommendations

The following voice scale settings are recommended if your deployment requires more than 3 concurrent calls to be handled per AP. The voice scale settings are enabled per operating channel (per radio). When enabled, all APs or SSIDs operating in that channel enhance voice call service. To enable:

  1. In the WebUI, go to Configuration > Devices > System Settings > Scale Settings
  2. Enter a channel number in the Voice Scale Channel List field and click OK.

 

NOTE:

Enable the voice scale settings only if the channel is meant for voice deployment. After voice scale is enabled, voice calls in that channel take priority over data traffic, and this results in a noticeable reduction of data throughput.

 

 

 

New Features

This section describes the new hardware/software features introduced in this release of FortiWLC.

Fortinet Universal Access Points

The new Fortinet Universal Access Points (FAP-Us) are dual radio, dual band 802.11ac access points. These access points are designed to provide a superior experience in data, voice, and video applications in enterprise-class deployments.

 

FAP-U221EV and FAP-U223EV

The FAPs support two 2×2 MIMO radios (band locked) with a single core and comply with the IEEE 802.3af and 802.3at PoE specifications. A maximum of 8 ESS profiles and 128 clients are supported.

 

FAP-U24JEV

The FAPs support two 1×1 MIMO radios (band locked) with a single core and comply with the IEEE 802.3af and 802.3at PoE specifications. A maximum of 8 ESS profiles and 128 clients are supported.

The FAP has one 2×2 radio, which is always configured as two 1×1 interfaces.

 

NOTE:

FAP-U221EV, FAP-U223EV, and FAP-U24JEV do not support the following features:

  • MU-MIMO
  • LACP
  • 0 – Not supported in version 8.4.0 only.
  • Enterprise Mesh – Not supported on FAP-U24JEV only.
  • Application Visibility (DPI)

 

FAP-U422EV

The FAP is a Wave-2 access point and supports two 4×4 MIMO radios (band locked) with a dual core. This device complies with the 802.3at PoE specifications. A maximum of 16 ESS profiles are supported.

The FAP supports all FortiWLC functionalities, the same as the FAP-U42xEV models.

 

For more information on the FAPs, see the corresponding Quick Start Guides.

 

 

Enhancements

These are the enhancements in this release of FortiWLC.

 

  • FAP-U422EV and AP832 are Passpoint R2 certified.
  • In FortiWLC 8.4.0, DFS is enabled for FAP-U32xEV (FCC and Japan), FAP-U22xEV (CE and Japan), and FAP-U24JEV (CE).
  • The Simple Service Discovery Protocol (SSDP) is supported for Chromecast discovery. A DNS configuration option is supported for FortiGate discovery.

 

Additional Information

This section describes information related to the usage of FortiWLC.

 

  • The Chromecast cast option is visible in the YouTube application only when the publisher or subscriber is in tunneled mode.
  • The capture-packets command with the -R filter captures all packets instead of filtered packets.

Clients and Encryption Keys

These are the maximum supported clients and encryption/decryption keys for FAP models.

 

In the table below, the Clients columns give the maximum supported clients per radio (VCell with ARRP off or on, and Native Cell); the Keys columns give the encryption/decryption keys handled in hardware and software.

FAP Model  | Clients: VCell, ARRP off | Clients: VCell, ARRP on | Clients: Native Cell | Keys: VCell HW | Keys: VCell SW | Keys: Native Cell HW | Keys: Native Cell SW
FAP-U42xEV | 170 | 170 | 256 | 170 | 0  | 256 | 0
FAP-U32xEV | 170 | 170 | 256 | 170 | 0  | 256 | 0
FAP-U22xEV | 128 | 128 | 128 | 64  | 64 | 64  | 64
FAP-U24JEV | 128 | 128 | 128 | 64  | 64 | 64  | 64

 

 

VCell Roaming across Access Points

These are the supported VCell roaming details across APs.

 

In the table below, Supported (2×2), Supported (3×3), and Supported (1×1) mean that roaming is supported with the radios operating in 2×2, 3×3, or 1×1 MIMO mode respectively.

Access Point | AP122           | AP822           | AP832           | FAP-U22xEV      | FAP-U32xEV      | FAP-U42xEV      | FAP-U24JEV
AP122        | Supported       | Supported       | Supported (2×2) | Supported (2×2) | Supported (2×2) | Supported (2×2) | Supported (1×1)
AP822        | Supported       | Supported       | Supported (2×2) | Supported       | Supported (2×2) | Supported (2×2) | Supported (1×1)
AP832        | Supported (2×2) | Supported (2×2) | Supported       | Supported (2×2) | Supported       | Supported (3×3) | Supported (1×1)
FAP-U22xEV   | Supported (2×2) | Supported       | Supported (2×2) | Supported       | Not supported   | Not supported   | Supported (1×1)
FAP-U32xEV   | Supported (2×2) | Supported (2×2) | Supported       | Not supported   | Supported       | Supported (3×3) | Not supported
FAP-U42xEV   | Supported (2×2) | Supported (2×2) | Supported (3×3) | Not supported   | Supported (3×3) | Supported       | Not supported
FAP-U24JEV   | Supported (1×1) | Supported (1×1) | Supported (1×1) | Supported (1×1) | Not supported   | Not supported   | Supported

 

Fixed Issues

These are the fixed issues in this release of FortiWLC.

 

Bug ID Description
453607 SNMP results were incomplete for neighboring APs count.
462374 In tunnel mode, STA did not communicate with the wired network after controller fail over.
464122 No framed IP attribute in the accounting start packet.
464687 wncagent spiked while running the event view; the GUI and CLI failed to expose the event history.
470393 STA did not receive packets from the wired network after controller fail over.
473365 OAP433 crashes with kernel panic.
448391 The Search/Filter option was not available for port profiles in the feature group configuration page of the FortiWLC GUI.
446850 The conn ap command connected to a different AP.
449185 AP CommNodeId duplicated in multiple APs.
452055 AP reboots with false ** FATAL ** Dead lock detected error.
450379 Channel mismatch on some radios, with primary channel displayed as 44 and operating channel as 40.
457195 sys commands failed in the AP CLI.
455522 With Service Control enabled, the services crashed and restarted.
454144 Wncagent crashed every hour.
446296 AP sent Deauth to station by incorrect station type and unknown BSSID.
443669 An incorrect number of stations displayed in the pie charts on the system dashboard.
456464 Device connected but unable to pass traffic.
449409 Nplus1 was disabled when firmware was upgraded on FortiWLC-1000D.
452204 Random AP reboots with exception in APP visibility.
452650, 452649 FAP-U421EV did not auto-negotiate 1Gbps full duplex.
453317, 453316 Random AP832 crashes (NIP [c000d50c] e500_idle+0x90/0x94).
453511 Unable to configure DNS and domain name during the initial setup when the controller was on default setting.
457172 Controller based Captive Portal not working in the Bridged mode for AP822i.
457183 With IE9, incorrect page displayed for the Security Profiles Configuration.
460169 Channel mismatch on some radios, with primary channel displayed as 36 (Non- DFS channel) and operating channel as 100 (DFS Channel).
460587 Unable to edit ESS profiles from the web GUI.
461127 APs lost IP configuration after reboot and came up with default configuration.
446772 CP bypass page displayed even though the client is MAC authenticated and bypass enabled.
381008 Coordinator restarted due to memory issues.
435490 All Chromecast devices did not show up in YouTube for casting.
423993 FAP-U421EV access points lost beacons in a virtual cell, causing clients to do assoc-2-assoc.
409488 Error in copying from backup configuration to running configuration.
422065 Controller not sending the RADIUS accounting packet.
462414 When the secondary DNS Server was configured, the secondary NetBIOS server gets the same IP address as the secondary DNS server.
448985 When controller fails over, OUI configuration of client_locater is not taken over to the new active controller.
449154 When the client_locator is enabled and the controller fails over, client_locator is disabled on the new active controller.
470643 Nplus1 configuration fails after firmware upgrade from 8.3 on FortiWLC1000D.
470641 IP address on the slave controller is missing after firmware upgrade from 8.3 on FortiWLC-1000D.
466824 FAP-U321 upgrade fails.
469118 wncagent spikes observed.
470822 FAP-U421 reboots while unable to handle kernel null pointer – LR is at wlc_scbfindband+0x5c/0x130 [wl].
437223 The Console page in Chrome indicates that Adobe Flash is not installed even when it is installed in the Spectrum manager.
438782 Spectrum analysis: Overlay interference is misinterpreted as interference detected by the FAP.
436573 When upgrading from any prior release to 8.3.3 in an N+1 configuration, the passive slave controller Switch and Reason fields show No and No Config respectively. This issue occurs on 64-bit controller models/instances.

470640 Radio Tx Freeze on FAP-U421EV & FAP-U423EV.
351641 [OAP-832] Frequent leaf node reboots with the LOST CONTACT with controller error.
475059 The controller IP address is set to 0.0.0.0 in the VPN administration page post upgrade to 8.4.0.
475307 [FAP-U42x] Radios’ operating channel is different than the configured channel.
439721 High Latency and ping loss observed on clients configured in bridged mode with native and Static VLAN.

 

 


Known Issues

These are the known issues in this release of FortiWLC.

 

Bug ID: 450682
  Description: Random FAP-U421EV crashes with kernel panic.
  Impact: The FAP reboots, which impacts client connectivity for the duration of the AP boot-up time.
  Workaround: None.

Bug ID: 455780
  Description: In some MAC client devices authentication fails and the client is not able to connect. This is due to the delay in processing EAP-TLS messages.
  Impact: This issue is specifically seen in MAC clients; because of the delay in EAP-TLS messages being processed by the AP, authentication sometimes fails and clients are not able to connect.
  Workaround: Set the authentication timeout to 3 seconds. For more information, contact Customer Support.

Bug ID: 461937
  Description: Sometimes, the FAP-U42x does not tag some packets on the bridged data plane.
  Impact: Data loss on wireless devices.
  Workaround: Connect to the AP and run sys perf off.

Bug ID: 420129
  Description: Fujitsu smart phones with AP822 rev2 randomly drop calls and then reconnect to the network. This is due to wrong beacon information.
  Impact: Glitches during voice calls.
  Workaround: Install the 8.2.7 special build. For more information, contact Customer Support.

Bug ID: 463646
  Description: Sometimes in the FAP-U units, under high multicast/broadcast traffic, performance issues and high latency are observed in the bridged mode.
  Impact: Latency in application usage for wireless clients.
  Workaround: Disable the Multicast-to-Unicast Conversion option.

Bug ID: 442046
  Description: [AP832] Sometimes, the APs do not respond on port 5000 and client connectivity is affected. The AP reboots when this condition is encountered.
  Impact: In 8.4.0, the AP auto-reboots when this condition is encountered.
  Workaround: For a root-cause fix, contact Customer Support to install the relevant patch.

Bug ID: 474057
  Description: [Virtual FortiWLC] In case of a fresh FortiWLC installation, the gateway does not recognize the services in the FortiWLC GUI. In Monitor > Service Control > Service Details, the Service column is blank.
  Impact: The Services pie chart in the Service Control Dashboard is not visible unless the setup command is run or the controller is rebooted.
  Workaround: Run the setup command and reboot the controller.

Bug ID: 474593
  Description: An AP description containing the string sh gets lost post upgrade.
  Impact: The AP description is set to the default (AP ID).
  Workaround: Avoid using the string sh in AP descriptions.

Bug ID: 453518
  Description: Difference in the AP signal strength on the 5 GHz band while operating in the normal mode and in the site survey mode (country code set to UK).
  Impact: While doing a site survey, there will be a difference in signal strength if the Tx power changes to values other than 3 and 4.
  Workaround: Contact Customer Support to install the relevant patch.

Bug ID: 466751
  Description: Sometimes, the APs reboot in a loop when trying to add new APs or doing a bulk reboot.
  Impact: APs cannot discover the controller.
  Workaround: None.

Bug ID: 462324
  Description: Sometimes, RADIUS requests are sent with the same port number for different IDs.
  Impact: TLS errors for the clients are seen at the RADIUS end. No impact on connectivity.
  Workaround: None.

Bug ID: 463626
  Description: Round-trip delays are observed randomly at the wired side of AP822i after the AP reboots.
  Impact: Latency on wireless clients.
  Workaround: In 8.4.0, reboot the AP. For a root-cause fix, contact Customer Support to install the relevant patch.

Bug ID: 463851, 448621
  Description: [FortiWLC-3000D/1000D] Sometimes, in multiple upgrade scenarios spanning releases, the master controller cannot be added to the slave controller in an N+1 setup.
  Impact: None noted.
  Workaround: Contact Customer Support.

Bug ID: 456513
  Description: Sometimes, an AP832 connected to a Cisco WS-C2960X-48FPD-L comes up as 802.3af instead of 802.3at with the BLE dongle.
  Impact: BLE is disabled.
  Workaround: Contact Customer Support to install the relevant patch.

Bug ID: 464308
  Description: APs stuck in the Disabled/Online state after reboot. This issue is observed in scale deployments, for example, rebooting 100+ APs at the same time.
  Impact: Client connectivity is affected until the AP reboots.
  Workaround: Reboot the AP.

Bug ID: 464541
  Description: The wired port profile on a Mesh uplink port gets lost after upgrade to FortiWLC 8.4.0.
  Impact: Wired clients cannot access the network.
  Workaround: Recreate the port interface for the AP.

 

Known Issues in FAP-U422/FAP-U24J/FAP-U22xEV

 

Bug ID: 451168
  Description: FAP-U24JEV/FAP-U22xEV DTIM functionality is not working. PS-Poll based power-save clients fail to receive multicast traffic when the Multicast-to-Unicast Conversion option is disabled in the ESS profile.
  Impact: Power-save clients sometimes fail to receive multicast traffic, and the battery life of the wireless device is drained.
  Workaround: Enable the Multicast-to-Unicast Conversion option (the default setting).

Bug ID: 453903
  Description: FAP-U24JEV client mitigation fails when the Rogue AP detection feature is enabled.
  Impact: Mitigation fails when the rogue AP operates on a foreign channel.
  Workaround: None.

Bug ID: 474882
  Description: [FAP-U22x] Phy tx error with fatal error reinitializing and psm watchdog observed randomly on the Radio 0/1 interface.
  Impact: Data loss is observed from when the error is reported until it recovers.
  Workaround: None.

 


Common Vulnerabilities and Exposures

This release of FortiWLC is no longer vulnerable to the following:

 

Bug ID   Vulnerability
450012   CVE-2017-1000251, CVE-2017-1000250
454662   CVE-2017-13077 to CVE-2017-13082, CVE-2017-13084, CVE-2017-13086 to CVE-2017-13088
461748   CVE-2016-8491
443753   Broadcom ESDK vulnerability fix.

 

Visit https://fortiguard.com/psirt for more information.

FortiWLC CLI Concepts


CLI Concepts

Getting Started

To start using the Command Line Interface:

  1. Connect to the controller using the serial console or Ethernet port, or remotely with a telnet or SSH2 connection once the controller has been assigned an IP address.
  2. To assign the controller an IP address, refer to the “Initial Setup” chapter of the FortiWLC (SD) Getting Started Guide.
  3. At the login prompt, enter a user ID and password. By default, the guest and admin user IDs are configured.
    • If you log in as the user admin, with the admin password, you are automatically placed in privileged EXEC mode.
    • If you log in as the user guest, you are placed in user EXEC mode. From there, you must type the enable command and the password for user admin before you can enter privileged EXEC mode (see the sample session below).
  4. Start executing commands.
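
A minimal sample session that logs in as guest and escalates to privileged EXEC mode; the login prompt format and the passwords shown are placeholders:

login: guest
Password: *****
default> enable
Password: *****
default#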

CLI Command Modes

The CLI is divided into different command modes, each with its own set of commands and in some modes, one or more submodes. Entering a question mark (?) at the system prompt or anywhere in the command provides a list of commands or options available at the current mode for the command.

User EXEC Mode

When you start a session on the controller, you begin in user mode, also called user EXEC mode. Only a subset of the commands are available in user EXEC mode. For example, most of the user EXEC commands are one-time and display-only commands, such as the show commands, which list the current configuration information, and the clear commands, which clear counters or interfaces. The user EXEC commands are not saved when the controller reboots.

  • Access method: Begin a session with the controller as the user guest.
  • Prompt: default>
  • Exit method: Enter either exit or quit.
  • Summary: Use this mode to change console settings, obtain system information such as system settings, and verify network connectivity.
Privileged EXEC Mode

To access all the commands in the CLI, you need to be in privileged EXEC mode. You can either log in as admin, or enter the enable command at the user EXEC mode and provide the admin password to enter privileged EXEC mode. From this mode, you can enter any privileged EXEC command or enter Global Configuration mode.

  • Access method: Enter enable while in user EXEC mode, or log in as the user admin.
  • Prompt: default#
  • Exit method: Enter disable to return to user EXEC mode.
  • Summary: Use this mode to manage system files and perform some troubleshooting. Change the default password (from Global Configuration mode) to protect access to this mode.
Global Configuration Mode

You make changes to the running configuration by using the Global Configuration mode and its many submodes. Once you save the configuration, the settings are stored and restarted when the controller reboots.

From the Global Configuration mode, you can navigate to various submodes (or branches) to perform more specific configuration functions; some configuration submodes are security, qosrules, vlan, and so forth. This mode configures parameters that apply to the controller as a whole.

  • Access method: Enter configure terminal while in privileged EXEC mode.
  • Prompt: controller(config)#
  • Exit method: Enter exit, end, or press Ctrl-Z to return to privileged EXEC mode (one level back).
  • Summary: Use this mode to configure some system settings and to enter additional configuration submodes (security, qosrules, vlan).
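
A short sketch of moving between the three modes; each command here appears elsewhere in this document, and the prompt name depends on your controller's hostname:

controller> enable
Password: *****
controller# configure terminal
controller(config)# exit
controller#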

FortiWLC Command Line-Only Commands


Command Line-Only Commands

Many CLI commands have equivalent functionality in the Web Interface, so you can accomplish a task using either interface. The following lists the commands that have no Web Interface equivalent.

EXEC Mode Commands

  • configure terminal
  • no history
  • no prompt
  • no terminal length|width
  • help
  • cd
  • copy (including copy running-config startup-config, copy startup-config running-config and all local/remote copy)
  • delete flash: image
  • delete filename
  • dir [dirname]
  • debug
  • disable
  • enable
  • exit
  • quit
  • more (including more running-config, more log-file, more running-script)
  • prompt
  • rename


  • terminal history|size|length|width
  • traceroute
  • show history
  • show running-config
  • show terminal

Config Mode Commands

  • do (see the example after this list)
  • ip username ftp|scp|sftp
  • ip password ftp|scp|sftp
  • show context
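
For example, the do command runs an EXEC-mode command without leaving configuration mode; the command shown being run here is only an illustration:

controller(config)# do show running-config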

Commands that Invoke Applications or Scripts

  • calendar set
  • timezone set|menu
  • date
  • capture-packets
  • analyze-capture
  • debug
  • diagnostics[-controller]
  • ping
  • pwd
  • shutdown controller force
  • reload controller default
  • run
  • setup
  • upgrade
  • downgrade
  • poweroff
  • show calendar
  • show timezones
  • show file systems
  • show memory
  • show cpu-utilization
  • show processes


  • show flash
  • show qosflows
  • show scripts
  • show station details
  • show syslog-host
  • show log
  • autochannel
  • rogue-ap log clear
  • telnet
  • syslog-host

FortiWLC Abbreviating Commands


Abbreviating Commands

You only have to enter enough characters for the CLI to recognize the command as unique. This example shows how to enter the show security command, with the command show abbreviated to sh:

Lab-mc3200# sh security-profile default

Security Profile Table

Security Profile Name : default

L2 Modes Allowed : clear

Data Encrypt : none

Primary RADIUS Profile Name :

Secondary RADIUS Profile Name :

WEP Key (Alphanumeric/Hexadecimal) : *****

Static WEP Key Index : 1

Re-Key Period (seconds) : 0

Captive Portal : disabled

802.1X Network Initiation : off

Tunnel Termination : PEAP, TTLS

Shared Key Authentication : off

Pre-shared Key (Alphanumeric/Hexadecimal) : *****

Group Keying Interval (seconds) : 0

Key Rotation : disabled

Reauthentication : off

MAC Filtering : off

Firewall Capability : none

Firewall Filter ID :

Security Logging : off

Allow mentioned IP/Subnet to pass through Captive portal : 0.0.0.0

Subnet Mask for allowed IP/Subnet to pass through Captive portal : 0.0.0.0

FortiWLC Using No and Default Forms of Commands


Using No and Default Forms of Commands

Almost every configuration command has a no form. In general, use the no form to:

  • Disable a feature or function.
  • Reset a command to its default values.
  • Reverse the action of a command.

Use the command without the no form to reenable a disabled feature or to reverse the action of a no command.

Configuration commands can also have a default form. The default form of a command returns the command setting to its default. Most commands are disabled by default, so the default form is the same as the no form. However, some commands are enabled by default and have variables set to certain default values. In these cases, the default command enables the command and sets variables to their default values. The reference page for the command describes these conditions.
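
As an illustration using the terminal history feature described later in this document: terminal history size enables the feature and sets a value, no terminal history disables it, and default history returns the buffer size to its default of ten lines.

controller# terminal history size 20
controller# no terminal history
controller# default history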

FortiWLC Getting Help


Getting Help

Entering a question mark (?) at the system prompt displays a list of commands for each command mode. When using context-sensitive help, the space (or lack of a space) before the question mark (?) is significant. To obtain a list of commands that begin with a particular character sequence, enter those characters followed immediately by the question mark (?). Do not include a space. This form of help is called word help, because it completes a word for you.

To list keywords or arguments, enter a question mark (?) in place of a keyword or argument. Include a space before the ?. This form of help is called command syntax help, because it reminds you which keywords or arguments are applicable based on the command, keywords, and arguments you already have entered.

TABLE 1: Examples of Help Commands

Command Purpose
(prompt)# help Displays a brief description of the help system.
(prompt)# abbreviated-command? Lists commands in the current mode that begin with a particular character string.
(prompt)# abbreviated-command<Tab> Completes a partial command name.
(prompt)# ? Lists all commands available in the command mode.

(prompt)# command? Lists the available syntax options (arguments and keywords) for the command.
(prompt)# command keyword ? Lists the next available syntax for this command.

The prompt displayed depends on the configuration mode.

You can abbreviate commands and keywords to the number of characters that allow a unique abbreviation. For example, you can abbreviate the configure terminal command to config t.

Entering the help command will provide a description of the help system. This is available in any command mode.
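
For example, word help and command syntax help look like the following; the exact lists returned depend on the controller model and the current command mode:

controller# sh?
show  shutdown
controller# show ?
(the CLI lists the keywords that can follow show)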

FortiWLC Using Command History


Using Command History

The CLI provides a history of commands that you have entered during the session. This is useful in recalling long and complex commands, and for retyping commands with slightly different parameters. To use the command history feature, you can perform the following tasks:

  • Set the command history buffer size
  • Recall commands
  • Disable the command history feature
Setting the Command History Buffer Size

By default, the CLI records ten command lines in its history buffer. To set the number of command lines that the system records during the current terminal session, and to enable the command history feature, use the terminal history command:

controller# terminal history [size n]

The terminal no history size command resets the number of lines saved in the history buffer to the default of ten lines.

To reset the history buffer size to its default (10), type default history:

controller# default history

To display the contents of the history buffer, type terminal history:


controller# terminal history

    7 interface Dot11Radio 1
    8 end
    9 interface FastEthernet controller 1 2
   10 show interface Dot11Radio 1
   11 end
   12 show interfaces FastEthernet controller 1 2
   13 sh alarm
   14 sh sec
   15 sh security
Recalling Commands

To recall commands from the history buffer, use one of the following commands or key combinations:

  • Ctrl-P or Up Arrow key: Recalls commands in the history buffer, beginning with the most recent command. Repeat the key sequence to recall successively older commands.
  • Ctrl-N or Down Arrow key: Returns to more recent commands in the history buffer after recalling commands with Ctrl-P or the Up Arrow key.
  • !number: Executes the command at that position in the history list (see the example below). Use the terminal history or show history commands to list the history buffer, then re-execute a command by its sequence number.

To list the contents of the history buffer, use the show history command:

controller# show history
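
For example, if entry 13 in the history listing is sh alarm, it can be re-executed by its sequence number; the entry number here is illustrative:

controller# !13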
Disabling the Command History Feature

The terminal history feature is automatically enabled. To disable it during the current terminal session, type no terminal history in either privileged or non-privileged EXEC mode:

controller# no terminal history
