EcoStruxure Geo SCADA Expert Forum
Schneider Electric support forum about installation, configuration, integration and troubleshooting of EcoStruxure Geo SCADA Expert (ClearSCADA, ViewX, WebX).
Posted: 2020-09-03 10:49 PM. Last Modified: 2023-05-03 12:11 AM
We're looking to deploy a Virtual ViewX solution for a customer to support 25 concurrent client connections. The Geo SCADA installation doco recommends (for 20 clients) a host with a CPU PassMark of 15000, a 2D graphics PassMark of 300, 10 cores, 32GB RAM and a 1Gb network. I'm interested in whether anyone has real-life experience deploying a VVX server for this order of clients and can offer any insight or pitfalls to watch out for when spec'ing the server. What have you used for the host - physical or virtual? If virtual, did it have dedicated graphics resources, and how successful was this in practice? I'm really just wanting to draw on others' experience to avoid known issues and propose the best solution for the customer, as I haven't personally deployed the VVX solution before... Thanks!
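For what it's worth, a naive way to extrapolate the doco's 20-client numbers to 25 clients - a sketch under the assumption that requirements scale roughly linearly with client count (which the doco doesn't actually state, so treat the result as a starting point only):

```python
import math

# Rough linear scaling of the documented 20-client host spec to 25 clients.
# The 20-client figures are the Geo SCADA install-doco numbers quoted above;
# linear scaling (and rounding up) is my own simplifying assumption,
# not a vendor sizing rule.
doc_spec_20 = {"cpu_passmark": 15000, "gfx2d_passmark": 300, "cores": 10, "ram_gb": 32}

def scale_spec(spec, clients, baseline_clients=20):
    factor = clients / baseline_clients
    return {k: math.ceil(v * factor) for k, v in spec.items()}

print(scale_spec(doc_spec_20, 25))
# -> {'cpu_passmark': 18750, 'gfx2d_passmark': 375, 'cores': 13, 'ram_gb': 40}
```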
Posted: 2020-09-07 10:50 PM
I haven't tried as high as 25; the max so far is 10 clients on 8 cores/16GB of RAM. So far no complaints have made it my way, but I'm not the active implementer or user of the system.
Some of the risk might be managed by putting a load balancer (e.g. a cloud load balancer, a web app firewall, or even just a small nginx on Linux) in front of multiple servers to spread the load. I haven't tried this so there may be some gotchas, but on paper it should work. Note that VVX uses WebSockets, which might cause some problems - it did with the OWASP rules in the WAF when I tried those.
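On the nginx option, a minimal sketch of what that front end might look like. The host names, ports and certificate paths below are placeholders, and using `ip_hash` to pin a browser to one VVX server is my assumption - none of this is from a tested deployment. The HTTP/1.1 upgrade headers are what nginx needs to proxy WebSockets:

```nginx
# Hypothetical sketch - server names, ports and cert paths are placeholders.
upstream vvx_pool {
    ip_hash;                          # sticky sessions: keep each client on one VVX server
    server vvx1.example.local:443;
    server vvx2.example.local:443;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/vvx.crt;
    ssl_certificate_key /etc/nginx/certs/vvx.key;

    location / {
        proxy_pass https://vvx_pool;
        # WebSocket support - without these the session upgrade fails
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;     # long-lived sessions; don't drop idle sockets
    }
}
```

Sticky sessions matter here because a VVX browser session holds state on whichever server accepted it; round-robin without affinity would break reconnects.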
Posted: 2020-09-11 07:06 AM
Yesterday I saw 50 (yes, 50!) VVX web clients log in to an Azure F16v2 server - a 16-core CPU with SSD and 32GB RAM.
Performance was tight, and complex mimics would stress the server a lot, but we had each client up with an alarm banner and new mimic changes every 5 seconds. An F32 server with more cores would handle it better.
Note: to get this up and running with this number of clients, you must run VVX on a separate server from Geo SCADA, and change the session settings. Go to the VVX Manager applet and pick the Sessions tab. At the bottom, choose "Multiple browser per session" and set a session count of 5. (All 50 users are still kept separate, but this allocates server resources to enable this sort of scalability.)
Posted: 2020-09-14 05:39 PM
That is great to hear.
Was this with the default ViewX logging still enabled?
In some recent testing we've found the ViewX logging to be quite the bottleneck in general ViewX performance.
It would be great if you had the opportunity to disable logging on this server to just compare the performance.
The NVMe RAID SSDs that the F16v2 (and F32) use should obviously mitigate many of the disk 'media' read/write performance issues. But I'm running an NVMe SSD setup on my computer too, and just the disk subsystem overhead (i.e. all the code executed to get a byte from the application to the disk) seemed to have a significant negative impact on performance.
Posted: 2020-10-15 08:47 AM
ViewX logging was not enabled. However, there are changes proposed to logging to make it more granular and less noisy.
Posted: 2020-10-19 03:36 PM
I see that GSE 2020 already includes some of the 'less noisy' work. The cached registry read logging, which had been one of the biggest performance impacts in my bench testing, is now suppressed.
Definitely looking forward to seeing more of the configurable logging with ViewX.
For the most part we don't have issues, so would prefer to be able to configure it to have very low overhead (i.e. disabled / minimal logging).
And we typically only report issues when they are reproducible, or at least recurring. And in those situations we can enable logging for a period (and suffer the performance hit) to obtain detailed 'error' results.