EcoStruxure Geo SCADA Expert Forum
Schneider Electric support forum about installation, configuration, integration and troubleshooting of EcoStruxure Geo SCADA Expert (ClearSCADA, ViewX, WebX).
Posted: 2021-11-28 05:16 PM. Last Modified: 2023-05-03 12:00 AM
Posted: 2021-11-28 06:36 PM
While looking at the other example projects, I found this project on GitHub:
https://github.com/GeoSCADA/Sample-DataFeeder
"The feeder uses the Advanced section of the client, which adds ‘on change’ functionality to points. In other words, if a point does not change then the data export process does not waste time interrogating the database and writing out unnecessary data."
"The purpose of the Data Feeder is to give a reasonably efficient way of exporting point data (value, quality and timestamps) to another system external to Geo SCADA."
@sbeadle do you think this will be scalable (to 1M+ tags) if I change the MQTT output from JSON to Sparkplug?
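For reference, as I understand it the switch is mostly a topic-namespace and payload-encoding change. A rough sketch with paho-mqtt (broker address, topic names and values all made up by me):

```python
import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)

# Plain JSON output, roughly what the DataFeeder sample emits today:
client.publish(
    "geoscada/points/PumpStation1.Flow",
    json.dumps({"v": 42.5, "q": "Good", "t": "2021-11-28T17:16:00Z"}),
)

# Sparkplug B would change two things: the topic must follow the
# spBv1.0/<group>/<message type>/<edge node>[/<device>] namespace, and the
# payload must be a protobuf-encoded Sparkplug payload (e.g. built with the
# Eclipse Tahu libraries), not JSON text:
#
#   client.publish("spBv1.0/WaterNetwork/DDATA/GeoSCADA1/PumpStation1",
#                  sparkplug_payload.SerializeToString())
```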
Thank you for all the excellent examples and the great documentation & videos!
Posted: 2022-02-02 09:29 AM
We have taken the DataFeeder code, added onto it, and plan to release it as a supported product in the near future. SP-B (Sparkplug B) is one of our planned future outputs.
Scalability... I think this is less a function of the protocol/data format and more about the source (the Geo SCADA server) and the destination (an Event Hub or Time Series Insights, for example, can only ingest so much, so fast). The DataFeeder code takes a fairly conservative approach to loading up the different parts of the system.
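To give a feel for the conservative pacing idea, the pattern is roughly the one below. Batch size and delay are made-up figures for illustration, not what our code actually uses:

```python
import time
from itertools import islice

BATCH_SIZE = 500     # assumed; tune to what source and sink tolerate
BATCH_DELAY_S = 0.2  # assumed; ~2,500 points/s sustained at this setting

def batched(iterable, n):
    """Yield successive lists of up to n items."""
    it = iter(iterable)
    while batch := list(islice(it, n)):
        yield batch

def feed(changed_points, read_fn, publish_fn):
    # read_fn/publish_fn are placeholders for the real source and sink calls.
    for batch in batched(changed_points, BATCH_SIZE):
        records = [read_fn(p) for p in batch]  # load the server in small chunks
        publish_fn(records)                    # one bulk write per batch
        time.sleep(BATCH_DELAY_S)              # keep both ends under their limits
```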
If you want assistance with setting something up, feel free to reach out privately. I am willing to share some of the things we have learned along the way.
Posted: 2022-02-03 01:46 AM
Exporting 1M points would be a challenge (depends on data rate too)! At some point the Data Feeder sample code becomes limited, because:
a) the check interval would have to be quite long, as the system would get bogged down processing data reads. You might need a 15 or 30+ minute interval.
b) On startup the feeder checks and sets on-change with a call for every point. This would take a long time to get going after a start or changeover; it could take an hour to start (rough arithmetic below).
Both of the above would load up the server, and you might then have to use a dedicated Standby-Only server.
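The arithmetic behind "an hour to start", assuming a few milliseconds per registration round trip (the per-point cost is an assumption, not a measurement):

```python
points = 1_000_000
ms_per_registration = 3.5  # assumed per-point cost of the on-change call
startup_hours = points * ms_per_registration / 1000 / 3600
print(f"~{startup_hours:.1f} h before the feeder is fully running")  # ~1.0 h
```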
At this scale you'd be better off using the HSI software from the UK projects team. This uses the same internal mechanism as the WWH or eDNA historian feeds, and is a direct historian-to-historian link.
Posted: 2022-03-30 12:15 AM
Thanks for the details, @sbeadle. Sorry for the slow reply; I was pulled onto another project.
"At this scale you'd be better off using the HSI software from the UK projects team."
Can you please point me in the right direction where to find more information about this?
Are there any other options you would consider?
Posted: 2022-03-30 01:30 AM
To inquire about the UK team's HSI, please contact your SE reps; they will point you to that offer.
Apart from using a DNP3 Slave RTU (suitable for relatively small point counts, i.e. a few thousand points), the only driver-based options are Pull, not Push.
The Client API, as described above, will allow you to 'watch' and export 100K-ish points (YMMV), with some care over architecture and update times (see the sketch below).
Beyond that you need a high-performance connector which works at the database level. The eDNA and WWH/Insights connectors do this, but only to their own database systems.
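By "care over architecture" I mean, for example, sharding the watch list across several feeder instances/connections so that no single client has to register and service everything. Purely illustrative (instance count and point count are assumptions):

```python
def shard(point_ids, workers):
    """Deal point ids round-robin into `workers` separate watch lists."""
    lists = [[] for _ in range(workers)]
    for i, pid in enumerate(point_ids):
        lists[i % workers].append(pid)
    return lists

# e.g. 100,000 points over 8 feeder instances -> 12,500 registrations each
watch_lists = shard(range(100_000), 8)
print([len(w) for w in watch_lists])
```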