The EcoStruxure IT Data Center Expert Virtual Appliance help fully describes deploying the Data Center Expert virtual appliance, and the upgrade and migration processes. It is available as a separate document on the APC web site.
Download the 5-node trial of the Data Center Expert virtual machine
The Data Center Expert server is available as a virtual appliance, supported on VMware ESXi 7.0.3. The full-featured demo of the virtual appliance monitors up to five device nodes and one surveillance node.
You can purchase a license key to upgrade to a production version to monitor additional device nodes and activate supported applications, or to migrate from a server hardware version to a virtual appliance.
Note: In Data Center Expert 8.1.0 and newer, the Data Center Expert Virtual Appliance OVA is built on VMware ESXi 6.7. Only E1000E and VMXNET3 network adapters are supported.
VMware ESXi 4.1.0 was used as the reference virtualization platform in previous versions and during the development of the Data Center Expert 7.2.x virtual appliance, and is the first supported virtualization platform.
Although it may function properly on any virtualization platform that supports this format or has an appropriate converter utility, only VMware ESXi has been tested and is supported.
To use the full-featured demo version of the Data Center Expert virtual appliance, you download the *.ova file from the APC web site, and deploy it to your virtualization platform using the default hardware configuration. For more information, see Hardware resource configuration guidelines and Data Center Expert virtual appliance equivalent configurations.
The demo version monitors a maximum of five device nodes and one surveillance node by default.
To monitor 25 device nodes, to add license keys for additional nodes, or to activate supported applications, you must upgrade the Data Center Expert virtual appliance demo to the production version.
Note: You can add an additional network adapter to enable private networking, or add additional hard disks to increase storage, after the OVA template is deployed.
To upgrade the demo to the production version, you must purchase and apply an activation key, available on the APC web site.
To monitor additional device nodes or activate supported applications, you must purchase and apply node license keys and application license keys for the virtual appliance.
To migrate a Data Center Expert hardware server to a virtual appliance, you must purchase and apply an activation key, and contact APC Support for new node license keys and application license keys for the virtual appliance.
To receive these keys, you are required to provide a unique MAC address and serial number for the Data Center Expert virtual appliance, and for the Data Center Expert hardware server you are replacing.
A unique serial number is generated for the Data Center Expert virtual appliance at startup. It is displayed in the "About Data Center Expert" display, accessed from the Help menu.
Note: The serial number for a Data Center Expert hardware server appears only on its serial number sticker.
Starting with Data Center Expert 8.1.0, VMware ESXi 6.7 is used as the reference virtualization platform for the Data Center Expert virtual appliance.
In Data Center Expert 8.1.0 and higher, the OVA is created using ESXi 6.7. Only E1000E and VMXNET3 network adapters are supported.
These instructions are for DCE versions 8.0.0 and older only.
VMware ESXi 4.1.0 was used as the reference virtualization platform in DCE versions 8.0.0 and older.
The Data Center Expert 8.0.0 and older OVAs were created using ESXi 4.0. They support version 4.1 and can run on versions up to 7.x, but may have decreased capabilities on versions above 4.x.
You can upgrade the VM compatibility to a later version in the VMware UI, particularly to increase the number of CPUs allocated to a DCE 8.0.0 and older VM.
When DCE version 8.0.0 or older is deployed to an ESXi 7.0 host, CPUs are limited to 8. After upgrading the VM compatibility, you can increase the CPUs based on the capabilities of the ESXi host.
Power off the VM in vSphere.
Right click the VM and select Compatibility > Upgrade VM Compatibility.
A warning is displayed about reverse compatibility that advises you to make a backup. When you are ready, click YES.
Select the compatibility version for the VM upgrade. Choose the version that matches your ESXi environment. Click OK.
Your VM is now compatible with the ESXi version you selected, and you can increase the number of CPUs allocated to your DCE VM.
This guide provides a starting point for sizing a new DCE solution.
If you set up DCE on a virtual machine, keep in mind that virtual environments are dynamic and need to be monitored and evaluated continuously. Adjustments are usually needed to ensure successful system performance.
For additional details on how to monitor and inspect the performance health of your DCE server, see the DCE performance troubleshooting guide.
Sensor updates calculation
When you size a DCE server, it is important to know the number of sensor updates per hour that the system will track. Sensor updates per hour is the number of sensors with values that change during one or more poll intervals over the course of an hour.
Calculating this value is not as simple as multiplying the total number of sensors by the number of polls per hour. Many devices have several sensors that do not change very often. If you calculate the rate of sensor change using the total sensor count, the result will be incorrectly inflated.
There are a few ways to more accurately approximate this value. To give you a starting point, we collected data from DCE systems that are connected to EcoStruxure and generated a data set from the connected devices. The average sensor quantity and the average rate of change per hour by device make and model are listed in the sensor updates calculator. You can use these reference values to calculate your sensor update per hour values.
For cases where you can’t use the reference document, you can use a small DCE deployment to measure the data, preferably with live devices deployed in their intended environment. Deploy a small DCE configuration and discover the device type in question. Then go to http://<dce server ip>/nbc/compress/support/sensorqstats. This page updates hourly and shows the amount of “processed” sensors for that hour. To get an accurate measurement, let the test run for a few hours, and then use the reported number as your update rate. Repeat this for the devices you want to profile.
Once you have all the data for each of your devices, add up the values. Remember, you want to calculate the sensor update quantity per hour for the entire system. Make sure you include virtual sensors if they will be used in the environment. Make sure you take your desired poll rate into account. You can drastically change your sensor update per hour value when you modify the poll period.
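The aggregation described above can be sketched as a short calculation. This is a minimal illustration only: the device names and per-device update rates below are hypothetical placeholders, so substitute the reference values from the sensor updates calculator or your own sensorqstats measurements.

```python
# Estimate total sensor updates per hour for a planned system.
# The per-device rates here are HYPOTHETICAL placeholders; replace
# them with values from the sensor updates calculator or your own
# measurements from the sensorqstats page.

# (device model, quantity, avg sensor updates per hour per device)
inventory = [
    ("rack_pdu",   100, 40),
    ("ups",         50, 60),
    ("env_sensor", 200, 25),
]

DEFAULT_POLL_SECONDS = 300  # assumed 5-minute poll period

def updates_per_hour(inventory, poll_seconds=DEFAULT_POLL_SECONDS):
    """Sum per-device update rates, scaled for the chosen poll period.

    Shortening the poll period increases how often changes are
    observed, so scale the measured rates by the poll-frequency ratio.
    """
    scale = DEFAULT_POLL_SECONDS / poll_seconds
    return sum(qty * rate * scale for _, qty, rate in inventory)

total = updates_per_hour(inventory)
print(f"Estimated sensor updates per hour: {total:,.0f}")
```

Halving the poll period in this sketch doubles the estimate, which mirrors how drastically the poll period can change your sensor updates per hour value.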
CPU and RAM sizing
Use the following characteristics to evaluate the CPU and RAM requirements for DCE server:
Device count
Sensor count
Sensor updates per hour
If your requirements are lower than all the listed limits for a configuration size, use the suggested CPU and RAM values. If any of the three values exceeds a listed limit, use the next largest configuration size.
Since sensor count and sensor updates per hour are often difficult to know before deployment, you can use the sensor updates calculator to see sample data gathered from DCEs deployed in the field. The guide contains average sensor count and average sensor updates per hour for some popular devices that DCE supports to help you determine estimated values.
It is important to note that operating with SNMPv3 comes with increased overhead and decreases the number of devices and sensors that a given DCE instance can monitor. The limits in the tables below assume using a specific protocol to manage the entire population (SNMPv1, Modbus, or SNMPv3). Mixed environments may see different upper limits for devices and sensors. These findings are based on testing against APC and Schneider Electric devices.
Basic Server Load (4 CPU / 4 GB of RAM)

                             SNMPv1/Modbus    SNMPv3
Device limits                500              125
Sensor updates per hour      45,000           11,250
Total sensors                67,000           16,750

Standard Server Load (8 CPU / 8 GB of RAM)

                             SNMPv1/Modbus    SNMPv3
Device limits                2000             500
Sensor updates per hour      180,000          45,000
Total sensors                270,000          67,500

Enterprise Server Load (16 CPU / 16 GB of RAM)

                             SNMPv1/Modbus    SNMPv3
Device limits                4000             1000
Sensor updates per hour      360,000          90,000
Total sensors                540,000          135,000
For configurations that go above the limits for the Enterprise server load, consider splitting up the device load across multiple DCE servers. You can also contact technical support to review specific sizing requirements on a case by case basis.
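The selection rule above (all three values must fit, otherwise move to the next size) can be sketched as a small helper. The limits are taken directly from the tables; the function name and structure are illustrative, not part of any DCE tooling.

```python
# Pick a DCE configuration size from the sizing tables above.
# A size fits only if ALL three values are within its limits.

SIZES = [  # (name, cpus, ram_gb, {protocol: (devices, updates/hr, sensors)})
    ("Basic",       4,  4, {"snmpv1": (500,   45_000,  67_000),
                            "snmpv3": (125,   11_250,  16_750)}),
    ("Standard",    8,  8, {"snmpv1": (2000, 180_000, 270_000),
                            "snmpv3": (500,   45_000,  67_500)}),
    ("Enterprise", 16, 16, {"snmpv1": (4000, 360_000, 540_000),
                            "snmpv3": (1000,  90_000, 135_000)}),
]

def pick_size(devices, updates_per_hour, sensors, protocol="snmpv1"):
    """Return the smallest configuration whose limits cover all values."""
    for name, cpus, ram, limits in SIZES:
        max_dev, max_upd, max_sens = limits[protocol]
        if (devices <= max_dev and updates_per_hour <= max_upd
                and sensors <= max_sens):
            return f"{name}: {cpus} CPU / {ram} GB RAM"
    return "Above Enterprise limits: split load across multiple DCE servers"

print(pick_size(450, 40_000, 60_000))            # fits Basic
print(pick_size(450, 40_000, 60_000, "snmpv3"))  # SNMPv3 device limit pushes to Standard
```

Note how the same workload lands in different tiers depending on protocol, reflecting the SNMPv3 overhead described above.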
Additional DCE variables that impact CPU and RAM sizing
There are other components in DCE to consider when you plan CPU and RAM sizing for your DCE virtual machine. These parameters vary widely, so exact guidance cannot be provided for all cases. The following activities and parameters have a direct impact on DCE performance. Depending on the extent of their use, additional modifications to the CPU and RAM may be needed.
Thresholds
Virtual Sensors
API Integrations (DCO, Web Services, etc.)
Number of users logging into or logged into the DCE thick client at the same time
Graphing and reporting usage
Surveillance
Discovering a large number of devices in a short period of time
If you plan to use these features, or you are concerned about their impact on system performance, you can review the details in the DCE performance troubleshooting guide.
Virtualization considerations for CPU and RAM sizing
All the sizing guidance provided assumes dedicated resources provisioned exclusively for the DCE virtual machine. In practice, unless you are using dedicated ESXi for each DCE VM, it is likely that your DCE virtual machine will share a pool of resources with other virtual machines. The load of other virtual machines being serviced by the same CPU and RAM resources has the potential to directly impact the performance of your DCE. This is especially true when you start to overprovision CPU and RAM resources, which can then lead to increased latency and resource contention.
To better understand the health of your DCE virtual machine in its virtualization environment, use the resources in the DCE performance troubleshooting guide to analyze your system’s performance in real time and adjust the system accordingly.
Storage sizing
Successful deployment and operation of DCE relies on appropriately provisioned storage. There are two main components to appropriately size storage:
Performance
Disk capacity
Storage performance
The DCE virtual machine performs a write-heavy workload with high volumes of small I/O operations and is extremely sensitive to disk latency.
DCE latency events often appear as dropped sensor changes. This is reported by DCE in the nbc.xml log and is a good indication of storage contention issues. When making decisions about storage, choose a storage solution that is optimized for:
IO workloads that are 90% or more write-centric
Writes that are mainly 1k block aligned
Supporting <1ms latency for all read / write operations
With the above I/O pattern and latency requirements accounted for, the number of sensor updates per hour again comes into play to size the disk throughput appropriately. Use the following as guidance for how much storage throughput will be required. It is strongly encouraged that ALL of the following configurations use SSD drives.
Basic Server
Up to 45,000 sensor updates per hour
Requires 2MB/sec sustained write throughput
Standard Server
Up to 180,000 sensor updates per hour
Requires 8MB/sec sustained write throughput
Enterprise Server
Up to 360,000 sensor updates per hour
Requires 16MB/sec sustained write throughput
Storage caching of 1GB or larger
For configurations that go above the limits outlined in the Enterprise server load section, consider splitting up the device load across multiple DCE servers. You can contact technical support to review specific sizing requirements on a case by case basis.
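The throughput guidance above maps directly from sensor updates per hour to a required sustained write rate; a minimal sketch of that lookup (the function name is illustrative):

```python
# Map planned sensor updates per hour to the sustained write
# throughput guidance above. SSDs are strongly encouraged for
# all tiers.

TIERS = [  # (max sensor updates per hour, required sustained write MB/s)
    (45_000,  2),   # Basic
    (180_000, 8),   # Standard
    (360_000, 16),  # Enterprise
]

def required_write_mbps(updates_per_hour):
    """Return the sustained write throughput (MB/s) for the workload."""
    for max_updates, mbps in TIERS:
        if updates_per_hour <= max_updates:
            return mbps
    raise ValueError("Beyond Enterprise limits: split load across servers")

print(required_write_mbps(100_000))  # Standard tier: 8 MB/s
```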
Storage capacity
This is the recommended disk capacity deployment strategy:
Deploy the DCE OVA and DO NOT adjust the size of the initial hard drive.
Add a second drive to the virtual machine with a capacity of 250GB.
Monitor the Storage Repository usage in the DCE thick client and use the purge notification settings to alert you when data retention is nearing current capacity.
Add additional disks (never resize existing) to the DCE virtual machine in 250GB increments to meet data retention needs.
Additional DCE variables that impact storage performance
There are several DCE activities to take into consideration when you are sizing the DCE server. There are too many permutations of these variables to provide guidance on all of them. You can reference the DCE performance troubleshooting guide to better understand how to measure and tune this aspect of your system.
Variable activities that affect storage performance:
Other virtual machines using the network storage
Network latency
Disk latency
CPU latency
Virtual sensors
API Integrations (DCO, Web Services, etc.)
Number of users logging into or logged into the DCE thick client at the same time
Graphing and reporting usage
Backup/Restore
Data purging
Surveillance
Discovering a large quantity of devices in a short period of time
See Surveillance deployments and Data Center Expert server performance
This guide covers some of the symptoms you might see when a DCE server has a performance bottleneck, tools you can use to better understand the resource that is being strained, and steps to try to mitigate the issue.
Symptoms of performance problems
There are several symptoms you might see while you interact with DCE that could suggest that there is a potential performance problem in your environment. This list is by no means exhaustive. Some common symptoms are:
Missed sensor update values
The nbc.xml log from the DCE server contains ERROR level messages about dropped sensor updates coming from com.apc.isxc.vb.listeners.sensor.impl.SensorQProcessorRunnable or com.netbotz.server.services.repository.impl.RepositoryEventServiceImpl
Log in to the DCE web client and click Logs in the upper right corner to view the nbc.xml log.
Delay in receiving alarm data
Alarms come into the system significantly after they were triggered on the monitored device.
Server hang, crash, or timeouts
This error message is displayed on the DCE server: Hung_task_timeout error
Contact technical support to gather capture server logs.
See https://www.apc.com/us/en/faqs/index?page=content&id=FA303596
Performance analysis from DCE
Top
Top is a standard Linux diagnostic tool used to monitor system performance. Direct access to the DCE server is not allowed. Contact technical support to capture server logs that include a top_output.
Note: Prior to DCE 7.7, the top output from captured server logs is averaged across all CPU cores. Starting with 7.7, the output per core is available, which is more insightful.
Within the top output, support looks at a few different values:
CPU load average
This lists a load average for the last one, five, and fifteen-minute period. If this number is abnormally high relative to the number of cores you have defined for the system, it is a good indicator that the system is running with a lot of CPU load. The exact cause of the CPU load won’t be clear from this data. The system could be CPU starved if this value remains high for an extended period of time.
It is expected that this value is elevated for some period of time after a system reboot or during a large discovery. You divide this value by the number of cores, and then multiply by 100 to get a percent utilization. Each physical core counts as 1; a hyperthreaded core counts as ½.
For example, an 8 core / 8 thread virtual machine should be able to sustain a load average of 8.0 without being considered oversubscribed. If you are using an 8 core, 16 thread configuration, your acceptable load average is more like 12.0 because not all 16 threads are backed by physical cores.
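The core-counting rule above can be sketched as a quick calculation (a rough heuristic per the text, not an exact CPU model):

```python
# Convert a top load average to percent utilization: each physical
# core counts as 1, each hyperthreaded (SMT-only) thread as 1/2.

def effective_cores(physical, threads):
    """Physical cores count fully; extra hyperthreads count as half."""
    return physical + (threads - physical) * 0.5

def utilization_pct(load_avg, physical, threads):
    """Percent utilization = load average / effective cores * 100."""
    return load_avg / effective_cores(physical, threads) * 100

# 8 cores / 8 threads: a load average of 8.0 is 100% (not oversubscribed)
print(utilization_pct(8.0, 8, 8))
# 8 cores / 16 threads: 12 effective cores, so ~12.0 is the acceptable ceiling
print(utilization_pct(12.0, 8, 16))
```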
Mitigation in this case consists of either reducing the load on your DCE (fewer devices, longer poll period) or allocating more CPU resources to your DCE to get your load average to a more acceptable level. Make sure to review the DCE sizing guide for insight on the best starting values for CPU configuration to use based on the system workload.
Wait average
This represents the amount of time your system is stopped waiting for the underlying storage device to service requests. DCE is extremely sensitive to IO path delays, so even a slightly elevated value for wait average that persists for any extended period of time can be an issue for the system.
Ideally you want to see this value listed by CPU core. If you see any one individual core with a %wa continually over 20, your storage is likely not keeping up with DCE. If the system is allowed to stay in this state for an extended period of time, you usually start to see the missed sensor update processing symptom listed above. Bear in mind that if you are reviewing the average top output instead of by core, this value can deceptively appear to be much lower due to averaging and the number of cores in the system.
Mitigation requires a deeper dive into your storage path. If you are using network storage, you want to review the latency and utilization of the storage array. If using local ESXi storage, you can review the Host performance data in VMWare. Usually, either reducing the load on the DCE by decreasing device count or increasing poll period will help. If the storage is truly subpar, upgrading to SSDs, removing other load from the storage system, or improving the network path between the DCE and storage may be required. Reference the DCE sizing guide for more details on appropriate storage sizing.
SensorQstats
This is a statistic that the DCE server keeps track of. It represents the amount of sensor processing the server is doing every hour. This value can be monitored at:
http://<dce server ip>/nbc/compress/support/sensorqstats
The dataset can also be retrieved by technical support with a capture server logs gather request.
Regardless of where you view the data, this statistic is published once every hour. This metric is good to monitor because it shows whether the DCE is keeping up with the current workload or falling behind. These values are of particular interest:
Processed
This is the number of sensor updates that the server has processed in the last hour. This value is directly impacted by the number of devices in your system, your poll period, and the number of sensor changes that are occurring.
This value represents the total number of unique events that the system completed within that 1-hour period of time. It is best observed during steady system processing. Events like discovering a large quantity of new devices can skew this number for a period or two. Use this value when you review the DCE sizing guide to determine CPU / RAM / Storage sizing.
Dropped
This value should always be zero on a healthy system. Any non-zero value indicates a sensor data point that was dropped because a component of the system cannot keep up. When this value is not zero, we often see %wa elevated in top output.
Remember, DCE is very intolerant of storage latency. If the value for dropped is recurring and non-zero, some amount of data is constantly being lost. If there is an occasional non-zero value, look into the system during those times; it is likely running near the edge of its capabilities and being pushed beyond its limits. Events such as a large alarm storm, a discovery pulling in a large number of devices, or similar high-load events can all push the system temporarily into this state.
A properly configured system should always have zero drops. Anything dropped will be lost forever, so it’s important to monitor and adjust resources accordingly to prevent this.
Remaining
This value represents the amount of sensor data still in the queue to be processed when the qstats report was run. This is not dropped data; it is data that had not yet finished being processed. On smaller systems, this will likely always be zero. As the workload increases, this value could start to become non-zero.
By itself, having some non-zero values here is not cause for alarm. If you are regularly seeing non-zero values, or the value is growing in size every hour, it’s a sign that the system is starting to have trouble keeping up.
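The Dropped and Remaining rules above can be sketched as a simple hourly check. The exact format of the sensorqstats page is not specified here, so parsing is intentionally left out; this sketch assumes you have already extracted the counters for the current and previous hour.

```python
# Evaluate one hourly sensorqstats sample against the guidance above:
# any drop is data lost forever; a growing Remaining queue suggests
# the system is starting to fall behind.

def assess_qstats(processed, dropped, remaining, prev_remaining=0):
    """Return a list of issues for one hourly sample, or ["healthy"]."""
    issues = []
    if dropped > 0:
        issues.append(f"{dropped} sensor updates dropped: "
                      "storage likely not keeping up")
    if remaining > 0 and remaining > prev_remaining:
        issues.append("queue backlog is growing: system may be falling behind")
    return issues or ["healthy"]

print(assess_qstats(processed=44_000, dropped=0, remaining=0))
print(assess_qstats(processed=44_000, dropped=120,
                    remaining=500, prev_remaining=100))
```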
Performance analysis from DCE VM
The primary focus of this section is specific to DCE run as a virtual machine. Information that is not VMware-centric technically applies to DCE physical servers also.
Sometimes, there are delays for reasons not readily seen from the DCE or DCE OS point of view. In these cases, it helps to review the performance data from the VMware side of things to see if there are any performance issues.
Resource Limits
It is good to verify whether any resource limits are defined for the DCE virtual machine. Resource limits are a throttling mechanism that allows a VM administrator to restrict the amount of resources a virtual machine can consume. These limits can be imposed on CPU, RAM, and storage resources, effectively restricting the virtual machine's use of these resources.
If there are resource limits in place, try removing them or raising them to a higher value. Then watch the system utilization values to check whether the change brings improvements.
Disk Latency
From within VMWare you can monitor the real-time disk performance results of the storage that is backing the DCE VM. The specifics of finding this data differs a bit between versions of VMWare, and whether you investigate from the ESXi locally or from within vCenter. All the versions have support for monitoring the disk latency.
To start, identify the DCE VM from within the hypervisor and review the details of the virtual machine. Specifically, look for the disk drive(s) of the DCE, and the storage backing that drive. If your DCE has more than one disk drive, they should ALL be located on the same storage destination. Splitting DCE drives among multiple storage backings almost always results in decreased VM performance and should be avoided as a general rule.
Look for the Advanced Performance Monitoring section of the ESXi host running the DCE VM. In that section, you can view the real time latency of all IO operations that host is sending to disk. The DCE is very sensitive to disk latency. You should ensure that the latency value, in ms, is less than 1 for the datastore backing the DCE VM. While some short-lived spikes can be tolerated, it is best to ensure the steady state and average response time remains below 1ms.
If response times exceed 1 ms, look for ways to lower that value. You can reduce the number of systems that also use that shared volume, isolate the DCE VM to be the only system using that volume, or upgrade the target volume to have more disks, faster disks, or preferably SSDs.
Esxtop
Drilling into another level of the hypervisor, you can run esxtop, a real-time performance analysis tool provided by VMWare. This utility is very similar to Linux top, and its usage is the same.
To start, SSH must be enabled on the ESXi running your DCE VM, and you must have the proper credentials to SSH into the ESXi. This is a real time analysis, so the information gathered will only be applicable if your DCE is in the performance degraded state while you run this tool. For intermittent issues, you should run this tool and then cause the event that triggers the degraded system state.
As an example, the following steps cover how to perform a 30-minute esxtop capture from the ESXi. There is additional documentation about running esxtop interactively in Additional resources below.
To capture a 30-minute data set from esxtop:
Enable SSH on the ESXi host.
SSH to the ESXi server hosting the DCE VM.
Run the command: esxtop -b -d 5 -n 360 -a |gzip >esxtopOutput.csv.gz This monitors the ESXi for 30 minutes (360 samples at 5-second intervals) and creates a compressed report of all the performance counters.
After the collection completes, SCP the file from the ESXi host and put the output on a Windows machine.
From the Windows machine, use Performance Monitor to analyze the data set collected. To do this:
Launch Performance Monitor.
From the left navigation, under Monitoring Tools, right click Performance Monitor and choose Properties.
Under the Source tab, change Data Source to Log Files and point it to the extracted contents you gathered from running esxtop.
Click OK.
Right click the graph and choose Add Counters.
You can now choose which data from the log collection you want to graph to determine signs of stress from typical system resources: CPU, RAM, drives. Some values of interest are:
CPU %Used of the DCE VM
Returns data similar to the Linux top data collected before, just another way to reference it
CPU load average of the host
Like with top, gives you CPU insight into how much load the ESXi CPU is under
%VMwait
Percentage of time the VM is waiting for kernel activity, usually disk IO
DAVG / KAVG / GAVG
Stats for latency of disk commands.
DAVG : Latency at the device driver level
KAVG: Latency at the VMKernel level
GAVG: Total guest latency (DAVG + KAVG)
For additional analysis of the esxtop data, see Additional resources.
Additional resources
Additional resources to help you better understand some of the performance tools, what they mean, and how to use them.
ESXTOP
ESXTOP quick overview
http://www.running-system.com/wp-content/uploads/2015/04/ESXTOP_vSphere6.pdf
ESXTOP metrics
https://www.virten.net/vmware/esxtop/
ESXTOP interpretation
https://communities.vmware.com/docs/DOC-9279
VMWare
VMWare KB: Troubleshooting ESXi virtual machine performance issues
https://kb.vmware.com/s/article/2001003
VMWare KB: Troubleshooting ESXi storage performance issues
https://kb.vmware.com/s/article/1008205
Questionnaire for performance escalation
Use these questions as a starting point for data that you should gather from the site if you suspect a DCE VM performance issue. If you open a case to diagnose this problem, technical support and engineering will request this data. Proactively gathering the data will help expedite issue resolution.
The questions are written to gain a better understanding of the environment hosting the DCE virtual machine. The goal is to understand the capabilities of the ESXi, the storage supporting DCE, and resource utilization.
VMWare Configuration
What version of VMware are you running on vCenter?
What version of VMware are you running on the ESXi?
What VM Hardware version do you have applied to the DCE VM?
Are your ESXi hosts in a cluster supporting vMotion of your VMs for load balancing?
How many hosts are in the cluster?
Is DRS enabled such that VMs can migrate between ESXi?
How often is your DCE VM migrating?
Which ESXis are hosting the DCE VMs in question? (if multiple ESXis, please list)
What is the make, model, and hardware specs of the ESXi server?
Specifically interested in CPU type and quantity, RAM quantity
Are your DCE VMs configured with multiple drives?
If yes, are all the drives located on the same storage location?
Are there any resource limit restrictions being set on your DCE VM?
CPU Limits? CPU Shares?
Memory Limits? Memory Shares?
For all DCE VM disk drives: Disk Shares? Disk IOPs Limit?
ESXi Local Storage
Are the ESXis using local storage to run any of the VMs? If no, skip this section.
If VMs are using local ESXi storage, what are the ESXi disk types (HDD / SSD)?
If HDD, what is the RPM speed of the disks?
If multiple disks are being used, what is the RAID scheme?
What is the size of your RAID Controller Cache?
Network Storage
If your DCE VM is leveraging network storage for its disk backing:
What are the make and model of the shared storage disk array?
What protocol is your network storage running (NFS / VMFS / SCSI )?
How many disks are there in the storage solution?
What are the drive types? SSD? HDD?
If HDD, what are the disk speeds?
How is the array provisioned (Single disk pool, multiple disk pools)?
If multiple pools, how many disks per pool?
What is the RAID configuration on the volume?
Is the DCE VM using an isolated volume or is it shared with other VMs?
Network Topology
Please describe the network topology where this DCE is deployed. Link speeds between nodes of the system are of specific interest.
Running System Data Collection
While running your typical DCE workload, use the esxtop tool to collect a snapshot of your system. Ideally, the collection should cover the period of time where you are experiencing the performance issue.
Esxtop collection
Enable SSH and SSH to the ESXi server hosting DCE.
Run the command: esxtop -b -d 5 -n 360 -a |gzip >esxtopOutput.csv.gz
Monitor the ESXi for 30 minutes and create a report of all the performance counters.
SCP the output from the ESXi and send to support.
After you have deployed the OVA, you can make changes to the virtual appliance settings from your virtualization platform client interface. You use apcsetup as the username and password.
Network settings: You can configure an additional network adapter to enable the private network (APC LAN) as the apcsetup user or through the Data Center Expert client.
MAC Address settings: A unique MAC address is required for each Data Center Expert virtual appliance. If the MAC address originally assigned to the primary or secondary network interface is changed, an error will occur on the primary interface, and the virtual appliance will not start. A message will be displayed indicating the MAC address expected before normal startup will be allowed.
Hard disk settings: To increase storage for the virtual appliance, you can create additional hard disks. You cannot change the size of an existing hard disk, or remove a hard disk once it has been created. An error will occur on the primary interface, and the Data Center Expert virtual appliance will not start.
Changes in the disk space will take effect once the Data Center Expert virtual appliance has restarted.
The "Storage Settings" display, accessed from the Server Administration Settings option in the System menu, shows the total storage space available for the virtual appliance, not the individual hard disks.
Note: To store large amounts of surveillance data, using a remote repository is recommended.
RAM settings: You can add RAM to the Data Center Expert virtual appliance. You must gracefully shut down the virtual appliance to configure the settings.
CPU settings: You can add CPUs to the Data Center Expert virtual appliance. You must gracefully shut down the virtual appliance to configure the settings.
Note: VMware supports fault tolerance on virtual machines with 1 CPU only. Please refer to your vendor's documentation for more information about fault tolerance.
To increase storage for the VM, you must create additional hard disks. You cannot change the size of an existing hard disk or remove a hard disk once it has been created. An error will occur on the primary interface, and the DCE virtual appliance will not start.
Gracefully shut down the DCE virtual appliance: In the DCE desktop client, go to File > Shut down server.
In your vSphere (VMware) virtual server settings, select the option to add a hard disk.
NOTE: Do not increase the size of the default drive (18GB for DCE 7.9.2 and older, 40GB for new installs of DCE 7.9.3 and newer) or any other previously allocated drives. The server will not see the newly added space, and it may cause storage issues. Once the DCE VM boots with a new drive, it allocates the space for the database on that drive and will not re-allocate space if the drive size is later increased. ALWAYS add a new drive when increasing drive space for the DCE VM.
Choose the hard disk size. For physical appliance equivalents, see Data Center Expert server equivalent configurations for VM.
Choose thin or thick provisioning.
Power on the virtual appliance.
NOTE: Changes in the disk space will take effect once the Data Center Expert virtual appliance has restarted. Do not shut down the virtual appliance while the disk reconfiguration process is running.
TIP: To verify that the Data Center Expert virtual appliance sees the additional space, go to System > Server Administration Settings > Storage Settings in the DCE desktop client. The local repository option shows the combined capacity of all of the disks added to the virtual appliance.
NOTE: To store large amounts of surveillance data, using a remote repository is recommended.
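Since existing virtual disks can never be resized or removed, growing storage always means adding a new disk sized to cover the shortfall. This short sketch (illustrative only, not part of DCE) computes that size; the function name and example figures are assumptions.

```python
# Given the disks already attached to the DCE VM and the desired total
# capacity, compute the size of the one additional disk to create.
def new_disk_size_gb(existing_disks_gb, target_total_gb):
    current = sum(existing_disks_gb)
    if target_total_gb <= current:
        return 0  # already at or above the target; never shrink or resize
    return target_total_gb - current

# e.g. a default 40 GB install grown to 500 GB total needs one new 460 GB disk
```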
See Complete update process for DCE
The virtual appliance demo version monitors up to five device nodes and one surveillance node. You can upgrade to a production version after the OVA is deployed. See the DCE sizing guide
Starting with Data Center Expert 8.1, VMware ESXi 6.7 is used as the reference virtualization platform for the Data Center Expert virtual appliance. Only E1000E and VMXNET3 network adapters are supported.
Download the *.ova file from the APC web site.
In your virtualization platform client interface, browse to the location of the *.ova file, and load the OVA. This may take several minutes. Alternatively, you may have the option to specify the URL for the *.ova file in your virtualization environment client interface.
Follow the prompts to accept the end user license agreement, and respond to options required to configure the OVA. Select thin provisioned disk format to allocate storage space on demand. Select thick provisioned disk format to allocate all storage space immediately.
You must provide the MAC Address, IP Address, hostname, and network settings before using the Data Center Expert virtual appliance.
Select the virtual appliance you created, and select the option to edit the virtual machine settings.
Specify the MAC Address for the virtual appliance manually. A unique MAC Address is required for each Data Center Expert virtual appliance. If the MAC Address originally assigned to the virtual appliance is changed, an error will occur on the primary interface, and the virtual appliance will not start.
Power on the virtual appliance.
In the console view, log in to the virtual appliance using apcsetup as the username and password.
Within five seconds, press m to modify the settings.
Follow the prompts to specify the IP Address, hostname, subnet mask, and DNS servers for the virtual appliance.
After the virtual appliance has restarted, type its IP Address or hostname into a browser to log in to the client.
You can add one additional network adapter to enable private networking. You cannot remove a network adapter once it has been added.
Gracefully shut down the virtual appliance.
Select the virtual appliance, and select the option to edit the virtual machine settings.
Select the option to add an Ethernet adapter.
Specify the type and the network connection. Ensure this connection is mapped correctly, particularly when the DHCP server will be enabled on the private network interface.
Power on the virtual appliance.
In the console view, log in to the virtual appliance using apcsetup as the username and password.
Within five seconds, press m to modify the settings.
Accept the settings you configured previously, or modify settings if needed.
Press y to accept the Enable private network interface option.
Specify whether you want to enable the DHCP server on the private network interface.
Use the DCE sizing guide to determine the hardware resources necessary for a virtual appliance fault tolerant configuration.
Note: The actual number of device nodes supported varies according to the device types discovered.
Note: VMware ESXi versions older than 6.7 support fault tolerance on virtual machines with 1 CPU only. Please refer to your vendor's documentation for more information about fault tolerance.
Use the DCE sizing guide to determine the hardware resources necessary for a virtual appliance to monitor a given number of device nodes.
Note: VMware ESXi versions older than 6.7 support fault tolerance on virtual machines with 1 CPU only. Please refer to your vendor's documentation for more information about fault tolerance.
Disk space
The disk space required to monitor a given number of nodes varies according to the device types monitored and the amount of data you want to store. In DCE 7.9.3 and newer, the minimum hard disk size is 40GB, increased from 18GB in previous versions.
To determine whether to add another hard disk, you can view available disk space in the "Storage Settings" display, accessed from the Server Administration Settings option in the System menu. View this display periodically to help determine how quickly the virtual appliance consumes disk space.
Note: To store large amounts of surveillance data, using a remote repository is recommended.
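Periodic readings from the Storage Settings display can be turned into a rough projection of when another disk will be needed. This back-of-the-envelope sketch is illustrative only; the function name and figures are assumptions, not DCE output.

```python
# Estimate how many days remain before the local repository fills up,
# based on two periodic readings of used space from Storage Settings.
def days_until_full(used_gb_then, used_gb_now, days_between, total_gb):
    growth_per_day = (used_gb_now - used_gb_then) / days_between
    if growth_per_day <= 0:
        return None  # usage flat or shrinking; no meaningful projection
    return (total_gb - used_gb_now) / growth_per_day

# e.g. used space grows from 120 GB to 150 GB over 30 days on a 500 GB
# repository: ~1 GB/day, so roughly 350 days until another disk is needed
```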
To migrate a standalone hardware server to a virtual appliance, you must purchase and apply an activation key. You must also contact APC Support for new node license keys and application license keys for the virtual appliance.
Perform a backup of the hardware server, using the Server Backup/Restore option, accessed from the Server Administration Settings option in the System menu.
Deploy the demo version OVA, and configure it using the hardware equivalents for the Basic, Standard, or Enterprise server from which you are migrating. The available disk space for the virtual appliance must be greater than the disk space used by the hardware server. You cannot restore to a virtual appliance with fewer CPUs, fewer network adapters, less RAM, or less available disk space than the hardware server. See Deploying and configuring a Data Center Expert virtual appliance, and Data Center Expert server equivalent configurations for VM.
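The restore precondition above can be checked mechanically before attempting the migration. The following is a hedged sketch under the stated rules (CPUs, NICs, and RAM at least equal; available disk space strictly greater than the hardware server's used space); the type and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ServerSpec:
    cpus: int
    nics: int
    ram_gb: int
    disk_gb: int  # for the hardware server: used space; for the VM: available space

def can_restore(hardware: ServerSpec, virtual: ServerSpec) -> bool:
    """True if the virtual appliance meets the migration resource requirements."""
    return (virtual.cpus >= hardware.cpus
            and virtual.nics >= hardware.nics
            and virtual.ram_gb >= hardware.ram_gb
            and virtual.disk_gb > hardware.disk_gb)
```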
Perform a restore on the virtual appliance, using the Server Backup/Restore option, accessed from the Server Administration Settings option in the System menu. You cannot restore to a virtual machine other than the Data Center Expert virtual appliance.
Apply the activation key to the virtual appliance.
Log in to the client. In the "License Keys" display, accessed from the Server Administration Settings option in the System menu, apply the new node license keys and application license keys you received from APC Support.
For information about supported configurations equivalent to Data Center Expert Basic, Standard, and Enterprise servers, see Data Center Expert virtual server equivalent configurations for VM.
Demo Configuration (minimum)
Hardware Resources
Up to five device nodes and one surveillance node
40 GB disk
1 GB RAM
1 CPU
1 network adapter
Thin provisioning
Maximum supported configuration
Hardware Resources
Up to 4000 device nodes (SNMPv1/Modbus)
or
Up to 1000 device nodes (SNMPv3)
1 TB disk
16 GB RAM
16 CPU
2 network adapters
Thin or thick provisioning
Note: The actual number of device nodes supported varies according to the device types discovered. See the DCE sizing guide
Note: VMware ESXi versions older than 6.7 support fault tolerance on virtual machines with 1 CPU only. Please refer to your vendor's documentation for more information about fault tolerance.
The virtual appliance equivalent configurations are based on Basic, Standard, and Enterprise server hardware configurations.
Use the DCE sizing guide to determine the hardware resources necessary for a virtual appliance to monitor a given number of device nodes.
Note: VMware ESXi versions older than 6.7 support fault tolerance on virtual machines with 1 CPU only. Please refer to your vendor's documentation for more information about fault tolerance.
Hardware Server                  Virtual Appliance Equivalent (SNMPv1 and Modbus)   Virtual Appliance Equivalent (SNMPv3)
Data Center Expert Basic         Up to 500 device nodes, 4 GB RAM, 4 CPU            Up to 125 device nodes, 4 GB RAM, 4 CPU
Data Center Expert Standard      Up to 2000 device nodes, 8 GB RAM, 8 CPU           Up to 500 device nodes, 8 GB RAM, 8 CPU
Data Center Expert Enterprise    Up to 4000 device nodes, 16 GB RAM, 16 CPU         Up to 1000 device nodes, 16 GB RAM, 16 CPU
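For scripting capacity checks, the equivalence table above can be expressed as a simple lookup. The values come straight from the table; the dictionary and function names are hypothetical.

```python
# Hardware-server-to-virtual-appliance equivalents, keyed by tier and protocol.
DCE_EQUIVALENTS = {
    # (tier, protocol): (max device nodes, RAM in GB, CPUs)
    ("Basic", "snmpv1_modbus"): (500, 4, 4),
    ("Basic", "snmpv3"): (125, 4, 4),
    ("Standard", "snmpv1_modbus"): (2000, 8, 8),
    ("Standard", "snmpv3"): (500, 8, 8),
    ("Enterprise", "snmpv1_modbus"): (4000, 16, 16),
    ("Enterprise", "snmpv3"): (1000, 16, 16),
}

def equivalent_config(tier, protocol):
    """Return the virtual appliance resources matching a hardware server tier."""
    nodes, ram_gb, cpus = DCE_EQUIVALENTS[(tier, protocol)]
    return {"max_nodes": nodes, "ram_gb": ram_gb, "cpus": cpus}
```

Note that the actual number of device nodes supported varies with the device types discovered, so these figures are upper bounds, not guarantees.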
See Minimum and maximum Data Center Expert virtual appliance configurations
To upgrade from the demo to the virtual appliance production version, you must install the activation key.
Purchase the activation key for the virtual appliance.
Log in to the client. In the "License Keys" display, accessed from the Server Administration Settings option in the System menu, apply the activation key.
Apply the new virtual appliance node license keys and application license keys you received from APC Support. The upgrade is complete once you have applied the license and application keys. If you want to modify the virtual appliance settings, continue to Step 4.
In your virtualization platform client, gracefully shut down the virtual appliance.
Select the option to edit the virtual appliance settings.
Modify the hardware, if necessary. See Deploying and configuring a Data Center Expert virtual server and Data Center Expert server equivalent configurations for VM.
Power on the virtual appliance.