Create ULL Compute Engine instances
This page describes how to create Ultra Low Latency (ULL) Compute Engine instances by using U4P or U4C machine types from the U4 machine family.
For an overview of the ULL infrastructure configuration process, see Configuration overview for ULL Solution.
Before you begin
Before you create ULL compute instances, complete the tasks in the following sections.
Create VPC networks
If you haven't already, create VPC networks for each of your instance's network interfaces as described in Configuration overview for ULL Solution.
Create a placement policy
You can optionally apply a spread placement policy to your ULL instance for increased resiliency. For more information, see Create and apply spread placement policies in the Compute Engine documentation.
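As an illustrative sketch (POLICY_NAME, REGION, and the domain count are placeholders; see the linked page for the authoritative procedure), creating and applying a spread placement policy might look like this:

```shell
# Sketch: create a spread placement policy that distributes instances
# across two availability domains in a region.
gcloud compute resource-policies create group-placement POLICY_NAME \
    --region=REGION \
    --availability-domain-count=2

# Apply the policy at creation time by adding this flag to the
# gcloud compute instances create command:
#   --resource-policies=POLICY_NAME
```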
Set your project
Set the gcloud CLI to use your project. Alternatively, you can include the --project=PROJECT_ID flag with each command in the following procedures.
gcloud config set project PROJECT_ID
Replace PROJECT_ID with the ID of your project.
Required roles
To get the permissions that you need to create Compute Engine instances, ask your administrator to grant you the following IAM role:
- To create and manage compute instances: Compute Instance Admin (compute.instanceAdmin) on your project
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
ULL instance configuration overview
To create a ULL compute instance and configure the instance to send or receive ULL unicast and multicast traffic, do the following:
| Step | Description |
|---|---|
| Create a ULL instance | Create an instance using a U4P or U4C machine type and connect its network interfaces to a general-purpose VPC network and a ULL VPC network. If you plan to use the instance as a multicast consumer, set the igmp-query flag to IGMP_QUERY_V2. |
| Configure routing for the non-nic0 interfaces for ULL unicast traffic | Configure source-based policy routing in the guest OS to ensure that egress packets leave through the correct interface and prevent asymmetric routing. |
| Configure an instance to be a ULL multicast consumer | Enable IGMP query, set IGMPv2 in the guest OS, configure reverse path filtering, and increase the ring buffer size of the network driver so that the instance can receive multicast traffic on its ULL network interfaces. |
Additionally, you can use the example commands on this page for testing multicast connectivity.
Create a ULL instance
This section describes how to create a new instance to use for ULL unicast and multicast.
For general information about creating compute instances, including additional configuration options, see Create and start a Compute Engine instance.
gcloud
To create a ULL instance, use the
compute instances create command.
For the network interfaces that attach to a ULL VPC
network, no-address must be specified.
Additionally, the following command includes the igmp-query flag to
enable the instance to be a multicast consumer. This flag isn't required
if the instance is a multicast producer only, or if the instance sends
and receives unicast only.
gcloud compute instances create INSTANCE_NAME \
--zone=ZONE \
--machine-type=MACHINE_TYPE \
--image-project=IMAGE_PROJECT \
--image=IMAGE_NAME \
--maintenance-policy=TERMINATE \
--network-interface=nic-type=NIC_TYPE,queue-count=QUEUES,network=GENERAL_PURPOSE_VPC_NETWORK,subnet=GENERAL_PURPOSE_SUBNET \
--network-interface=nic-type=NIC_TYPE,queue-count=QUEUES,network=ULL_VPC_NETWORK,subnet=ULL_SUBNET_1,no-address,igmp-query=IGMP_QUERY_V2 \
--network-interface=nic-type=NIC_TYPE,queue-count=QUEUES,network=ULL_VPC_NETWORK,subnet=ULL_SUBNET_2,no-address,igmp-query=IGMP_QUERY_V2
Replace the following values:
- ZONE: the zone in which to create the instance
- INSTANCE_NAME: a name for the instance
- MACHINE_TYPE: the U4P or U4C machine type of the instance
- IMAGE_PROJECT: the image project. For testing during Preview, use the image project provided by Google. See Operating system support for U4 machine types.
- IMAGE_NAME: the image name. For testing during Preview, use the image provided by Google. See Operating system support for U4 machine types.
- NIC_TYPE: the network interface type to use. Use the supported network interface type for the specific zone in which you are creating your instance:
  - For us-south1-d, specify IDPF.
  - For us-south1-e, specify GVNIC.
- QUEUES: the number of receive and transmit queues for processing packets from the network:
  - For GVNIC, you must include the queue-count field and specify a value of 32 for XDP support.
  - For IDPF, omit queue-count=QUEUES from the command.
- GENERAL_PURPOSE_VPC_NETWORK, GENERAL_PURPOSE_SUBNET: the VPC network and subnet to attach the nic0 interface of the instance to
- ULL_VPC_NETWORK: the ULL VPC network to attach the non-nic0 interfaces to
- ULL_SUBNET_1: the subnet in the ULL VPC network to attach the nic1 interface to
- ULL_SUBNET_2: the subnet in the ULL VPC network to attach the nic2 interface to
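After the instance is created, you can confirm that its interfaces attached to the intended networks and subnets, for example:

```shell
# Print only the networkInterfaces section of the instance description,
# which shows each interface's network, subnetwork, and addresses.
gcloud compute instances describe INSTANCE_NAME \
    --zone=ZONE \
    --format="yaml(networkInterfaces)"
```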
Configure routing for the non-nic0 interfaces for ULL unicast traffic
By default, an instance uses the default route associated with its nic0 interface
to send traffic to any destination outside of its directly connected subnet. For
more information, see the multiple network interfaces overview.
For your instance's nic1 and nic2 interfaces to
successfully send and receive ULL unicast traffic, you must configure
source-based policy routing in the guest OS. This configuration ensures that
egress packets leave through the correct interface and prevents asymmetric
routing, where traffic enters one interface but attempts to exit through nic0.
For an example of how to configure policy routing, see Configure policy routing in the Configure routing for an additional network interface tutorial.
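As an illustrative sketch of source-based policy routing (the addresses, gateway, and table number here are hypothetical; the linked tutorial is the authoritative procedure), the guest OS configuration for nic1 might look like the following:

```shell
# Hypothetical example: nic1 is eth1 with address 10.10.1.5/24 and
# subnet gateway 10.10.1.1. A dedicated routing table (101) ensures
# that traffic sourced from eth1's address leaves through eth1
# instead of falling back to the default route on nic0.
sudo ip route add 10.10.1.0/24 dev eth1 src 10.10.1.5 table 101
sudo ip route add default via 10.10.1.1 dev eth1 table 101
sudo ip rule add from 10.10.1.5/32 table 101
# Repeat with a second table (for example, 102) for nic2/eth2.
```

These rules apply only until reboot; the linked tutorial describes how to make the configuration persistent for your distribution.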
Configure an instance to be a ULL multicast consumer
This section describes how to configure an existing ULL instance to be a multicast consumer.
Enable IGMP query on an existing ULL instance
If you didn't enable IGMP when creating your instance, you can enable it on your existing instance as described in this section.
gcloud
To enable IGMP query on an existing ULL instance, use the
compute instances network-interfaces update command.
Repeat the following command for each network interface that you want to receive multicast traffic.
gcloud compute instances network-interfaces update INSTANCE_NAME \
--zone=ZONE \
--network-interface=NETWORK_INTERFACE_NAME \
--igmp-query=IGMP_QUERY_V2
Replace the following values:
- INSTANCE_NAME: the name of the instance
- ZONE: the zone of the instance
- NETWORK_INTERFACE_NAME: the name of the network interface on which to enable IGMP query. In Google Cloud, the format is nicNUMBER, such as nic0, nic1, or nic2.
Set IGMPv2 in the guest OS
To set IGMPv2 in the guest OS of your instance, do the following:
Connect to the instance by using SSH.
Run the following command and identify the device names of the network interfaces that you want to receive multicast traffic.
sudo ifconfig
For each applicable network interface, run the following command to force IGMPv2.
echo 2 | sudo tee /proc/sys/net/ipv4/conf/NETWORK_INTERFACE_DEVICE_NAME/force_igmp_version
Replace NETWORK_INTERFACE_DEVICE_NAME with the device name of the network interface, such as eth0, eth1, or eth2.
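The /proc setting doesn't survive a reboot. One way to make it persistent (a sketch; the drop-in file name is arbitrary) is a sysctl configuration file:

```shell
# Sketch: persist IGMPv2 for eth1 across reboots via a sysctl drop-in.
echo "net.ipv4.conf.eth1.force_igmp_version = 2" | sudo tee /etc/sysctl.d/99-force-igmpv2.conf
# Reload all sysctl configuration files so the setting takes effect now.
sudo sysctl --system
```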
Configure reverse path filtering (rp_filter)
In some cases, such as with multi-NIC instances, strict source validation by
reverse path filtering (rp_filter) can cause legitimate multicast packets to
be dropped. To prevent this from happening,
you can configure reverse path filtering to loosen or disable source validation
on non-nic0 network interfaces that receive multicast traffic.
For example, the following commands configure reverse path filtering to disable
source validation on eth1 and eth2 by setting rp_filter to 0.
sudo sysctl -w net.ipv4.conf.all.rp_filter=0
sudo sysctl -w net.ipv4.conf.eth1.rp_filter=0
sudo sysctl -w net.ipv4.conf.eth2.rp_filter=0
For more information about rp_filter, see
IP Sysctl in the Linux
kernel documentation.
Increase the ring buffer size of the network driver
This section describes how to increase the ring buffer size of the network driver on your instance.
For multicast consumers that receive high levels of traffic, increase the ring
buffer size of the network driver to a value of
2048 to help avoid packet loss. Do this for each network interface that
receives multicast traffic.
The following example commands configure the devices for nic1 and nic2 (eth1 and eth2):
sudo ethtool -G eth1 rx 2048 tx 2048
sudo ethtool -G eth2 rx 2048 tx 2048
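You can check the driver's current and maximum supported ring sizes before and after resizing, for example:

```shell
# Show the preset maximums and current RX/TX ring buffer sizes for eth1.
sudo ethtool -g eth1
```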
Example commands for testing ULL multicast connectivity
This section provides example commands for testing multicast traffic without
starting an application workload by using the
iperf tool. The steps in this section
require that the multicast configuration described in Configuration overview for ULL Solution is
complete.
Send ULL multicast traffic from a multicast producer
Connect to the instance by using SSH.
Install iperf if you haven't already. iperf3 doesn't support multicast; the following command installs the iperf package, which provides iperf2.
sudo yum install iperf
To send multicast traffic to the multicast group IP address, run the following command:
iperf -c MULTICAST_GROUP_ADDRESS%NIC -p 1234 -l 512 -i 1 -u -b 1000pps -t 999999 -B NIC_IP_ADDRESS
Replace the following values:
- MULTICAST_GROUP_ADDRESS: the multicast group IP address
- NIC: the network interface device name, such as eth1 or eth2
- NIC_IP_ADDRESS: the IP address that is assigned to the NIC that you specified
Join a group and receive ULL multicast traffic from a multicast consumer
Connect to the instance by using SSH.
Install iperf if you haven't already.
sudo yum install iperf
To join a multicast group and log the traffic that you receive, run the following command:
iperf -s -p 1234 -B MULTICAST_GROUP_ADDRESS%NIC -l 512 -u -i 1
Replace the following values:
- MULTICAST_GROUP_ADDRESS: the multicast group IP address
- NIC: the network interface device name, such as eth1 or eth2
For example, the following command joins the group with the IP address 224.1.0.176, receives packets of up to 512 bytes, and logs the traffic received:
iperf -s -p 1234 -B 224.1.0.176%eth1 -l 512 -u -i 1
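To confirm that the consumer actually joined the group while the iperf server is running, you can inspect the kernel's multicast state (a diagnostic sketch; eth1 matches the example above):

```shell
# List multicast group memberships on eth1; the joined group address
# should appear in the output while the iperf server is running.
ip maddr show dev eth1

# Show the kernel's IGMP state, including per-interface group membership.
cat /proc/net/igmp
```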
What's next
- To enable busy polling and test Onload features for ULL compute instances, see Work with Onload.
- To synchronize your instance system clock to the physical NIC clock of its host server, see Configure accurate time.