Design Question: GeoSCADA Data Table vs External SQL Server DB

EcoStruxure Geo SCADA Expert Forum

Schneider Electric support forum about installation, configuration, integration and troubleshooting of EcoStruxure Geo SCADA Expert (ClearSCADA, ViewX, WebX).

Solved
MikeSI (Ensign)

Posted: 2024-11-23 05:18 AM


Hi all,

 

I'm working on a project which involves storing, gathering and querying data from a few sources (both within GeoSCADA and also external).

 

I'm at the tinkering stage and tossing up whether to use Data Tables, or to use an external SQL database (such as SQL Server Express) for the data storage.

 

Has anyone been down this road before? Any lessons learnt?

Is there a 'best practice' solution?

 

Seems to me, if I go with Data Tables the issue will be accessing and storing data from outside GeoSCADA.

 

If I go with an external DB, the issue will be accessing/storing data from inside GeoSCADA. The only way I know to do this is using ODBC linked tables, and I'm not sure how reliable that is.

 

Any advice much appreciated.

Labels
  • Labels:
  • SCADA

Accepted Solutions
tfranklin (Commander)

Posted: 2024-11-25 07:25 AM


There's a lot of "it depends" when you start to design stuff like this.  Some questions you'll need to consider:

  1. What generates the data?  Is it client driven via script?  Is it logic? Is it both?
  2. How much data is being stored?
  3. What is the frequency at which data is updated/inserted/deleted in the table?
  4. If I store it externally, how can we ensure the data is redundant/fault tolerant and available across all GS servers?

In general, my recommendation is that if you're going to keep it in a data table in Geo SCADA, you want to ensure the table never exceeds 100MB.  Going beyond that could cause issues with server synchronization.  Typically we store as much in the GS data tables as we can until we see that growth might become an issue.  We also have some utilities explicitly written to store data outside of GS when we know that size will become an issue.

  • If you're storing data inside of a data table:
    • Avoid frequent writes
    • Don't exceed 100MB as a rule of thumb (a little more is probably okay)
    • Consider pre-loading the data (checkbox on the properties) if the table is large
    • Maybe consider a background process for purging cache on it

 

  • If you choose to store data in SQL
    • Ensure you have a redundancy plan.
    • Utilize Linked ODBC tables to perform CRUD operations from within Geo SCADA
    • Avoid using Linked ODBC tables in logic.  It's a death-wish. 
    • You can store far more data, but you'll need to consider the query load clients will place on it
    • You can utilize SQL views to do a lot of complex stuff for pre-processing

 

I'm sure somebody will disagree with something above, but in general the answer is always that it depends.  I'd consider the size and the rate at which you write to the tables to make a decision.  If it's infrequent and < 100MB of data, you're probably fine leaving it in a data table within GS.
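The rule of thumb above can be sketched as a tiny decision helper. This is purely illustrative Python, not Geo SCADA code; the function names, the 100MB default limit, and the "infrequent writes" threshold are assumptions drawn from the post.

```python
# Hypothetical sizing helper illustrating the rule of thumb: keep an in-database
# Geo SCADA data table under ~100 MB and lightly written; otherwise prefer
# external SQL. Names and thresholds are illustrative, not a Geo SCADA API.

def estimated_table_mb(row_count: int, avg_row_bytes: int) -> float:
    """Rough size estimate for a data table, in MB."""
    return row_count * avg_row_bytes / (1024 * 1024)

def recommend_storage(row_count: int, avg_row_bytes: int,
                      writes_per_minute: float, limit_mb: float = 100.0) -> str:
    """Small and infrequently written -> data table; otherwise external SQL."""
    size_mb = estimated_table_mb(row_count, avg_row_bytes)
    if size_mb < limit_mb and writes_per_minute < 1:
        return "data table"
    return "external SQL"

print(recommend_storage(200_000, 120, 0.1))   # ~23 MB, infrequent writes -> data table
print(recommend_storage(2_000_000, 120, 10))  # ~229 MB, frequent writes -> external SQL
```

The thresholds are the decision inputs the answer names: estimated size against the 100MB guideline, plus write frequency.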


Replies 6

MikeSI (Ensign)

Posted: 2024-11-27 03:58 PM

In response to tfranklin

Thank you @tfranklin for this detailed answer. That's great information.

 

I've decided to go with Data Tables, and I have some logic which exports data and also trims the table to prevent it getting too large. I looked at ways to dynamically create new data tables (say every month or year) but that got too hard.
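The export-then-trim approach described here can be sketched as follows. sqlite3 stands in for the data store; in Geo SCADA this would run against the data table via its SQL interface. The table and column names (`readings`, `ts`, `value`) and the 7-day retention window are illustrative assumptions.

```python
# A minimal sketch of the export-then-trim retention pattern: archive rows
# older than the retention window, then delete them so the table never grows
# unbounded. sqlite3 is a stand-in store, not Geo SCADA.
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts TEXT, value REAL)")

now = datetime(2024, 12, 1, tzinfo=timezone.utc)
for days_ago in range(10):
    ts = (now - timedelta(days=days_ago)).isoformat()
    conn.execute("INSERT INTO readings VALUES (?, ?)", (ts, float(days_ago)))

# 1) Export rows older than the retention window (here: 7 days)...
cutoff = (now - timedelta(days=7)).isoformat()
exported = conn.execute(
    "SELECT ts, value FROM readings WHERE ts < ?", (cutoff,)).fetchall()
# ...e.g. write `exported` to CSV or an archive DB here.

# 2) ...then trim them from the live table.
conn.execute("DELETE FROM readings WHERE ts < ?", (cutoff,))
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(len(exported), remaining)  # 2 rows exported, 8 kept
```

ISO-8601 timestamps in a single timezone compare correctly as strings, which keeps the cutoff query simple.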

 

I found this statement funny and intriguing:

Avoid using Linked ODBC tables in logic.  It's a death-wish.


This was exactly my original plan 😁 Any particular reason why it's a big no-no?

Does this mean that CRUD operations on external DBs from logic are not possible, and are only possible from client script using linked ODBC?

 

Thanks again!


AdamWoodlandToo (Lt. Commander)

Posted: 2024-11-27 08:02 PM


I don't disagree with any of the above really, I'll just chip in some other things to consider:

 

1. Who in the business owns the data? Quite often with non-TSV-style data, even though it's used in SCADA, it isn't the SCADA team that owns it. So why should they be the ones who hold it, take responsibility for it, and maintain who has permissions to it?

 

2. Do you have, or will you have, multiple Geo SCADA systems that need the same data? A single source of truth for the data would be better than duplication, and that might drive where best to store it.

 

And yes, linked tables and logic are a bad combination, due to the expectations logic has when it reads and writes its variables.


MikeSI (Ensign)

Posted: 2024-11-29 11:52 AM

In response to AdamWoodlandToo

Thanks @AdamWoodlandToo . Both really good points to consider.

 


AdamWoodlandToo (Lt. Commander)

Posted: 2024-12-02 04:08 PM. Last Modified: 2024-12-02 04:18 PM



Any particular reason why it's a big no-no?

Does this mean that CRUD operations on external DBs from Logic is not possible and is only possible from client script using linked ODBC?


Basically, logic is designed to run in very quick, short executions: think reading inputs, executing, and writing outputs in sub-millisecond territory. While some queues are used, it is a fairly linear process. This allows lots and lots of logic to run in a single thread while maintaining an integrity that multi-threading just doesn't support (at least 25 years ago when it was first written*). There's some in-depth info I wrote many years ago with the then development manager of ClearSCADA at https://community.se.com/t5/Geo-SCADA-Knowledge-Base/Logic-Execution/ba-p/279061 which is still valid today.

 

Now, the problem is that logic (structured text) was really the only server-side programming language we had, so people started using it for more than what it was originally designed for. To allow that, the developers added the ability to use SQL (%S and %D), which was frankly one of the most amazing additions to logic ever. But the problem then became that inputs (and outputs) went from sub-millisecond to millisecond-plus, severely impacting scaling (think templates) and the speed of what logic can do. If you're querying the real-time memory-resident tables and making the query efficient, it works; but when you start querying the disk-based tables (CDBHistoric, CDBEventJournal), things start to cause some serious issues.

 

And this is where linked tables come in. First off, the query goes outside DBServer, so there is a speed impact there. Then you're querying a third-party database which may or may not be fast, and because logic has to wait for the results to come back, it effectively stalls; that means a lock on the database for an extended time which may or may not be releasable temporarily. The NOCACHE keyword came around the same time period but for a different purpose (accumulation processing of batched data, because the queuing of logic inputs caused a problem with the maths), but it turns out the move in the logic process from an early write lock to a late read lock also helps with the SQL side of things.

 

Compare that to scripting, which is client-side: the database locks can be released easily, so the impact of any stalls is much, much smaller.

 

And then, to compound the problem: I used a couple of logic programs with linked tables about 15 years ago against Oracle. As the logic ran once every 10 minutes and I made sure the query took only a millisecond or two, it wasn't identified as a problem and no one had been bitten, yet. Until one day the Oracle server disappeared, and when the query was issued, instead of the Oracle components eventually coming back saying offline (which would still be an issue because of the timeout delay under normal circumstances), the API call never replied to DBServer! A bug in the Oracle client software, but basically ClearSCADA sat there for 20 minutes with a lock on the database that couldn't be released, and so the system locked up. After 20 minutes the next logic program could run, and again 20 minutes of database lockup, after which the other program was scheduled to run... you get the point. Oracle came back online, the problem was fixed, and the system recovered.

 

The customer was quite nice about it really. At the time I felt very guilty when I ran through the logic process in my head and realised the problem, and I still do a little to this day. I should have thought it through more and identified the problems then, as I had the information, rather than just trying to solve a customer's problem.

 

So that is the issue that triggered the advice not to use logic with linked tables. Logic isn't really designed for "slow" reads/writes, whatever the source, and you're at the mercy of third-party components impacting the availability of the system.
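The stall Adam describes can be illustrated with a toy Python model: a single worker thread stands in for the logic execution thread, and a sleep stands in for the hung ODBC round trip. This is an illustrative sketch, not Geo SCADA code; all names are invented for the example.

```python
# Toy model of a single-threaded executor: one slow "linked-table query"
# delays every task queued behind it, just as a stalled ODBC call stalls
# every logic program waiting on the single logic thread.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_linked_table_query():
    time.sleep(0.5)   # stands in for a hung ODBC round trip
    return "rows"

def fast_logic_program():
    return "done"

# One worker thread models the single logic execution thread.
with ThreadPoolExecutor(max_workers=1) as logic_thread:
    slow = logic_thread.submit(slow_linked_table_query)
    fast = logic_thread.submit(fast_logic_program)

    start = time.monotonic()
    fast.result()                      # must wait for the slow query first
    waited = time.monotonic() - start

print(f"fast program waited {waited:.2f}s")  # roughly the full 0.5s stall
```

In the Oracle incident the "sleep" never returned at all, so everything queued behind it waited for the 20-minute lock instead of half a second.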

 

With the addition of Python into the server things will hopefully be improved, we'll see!

 

(*) When SCX6 (ClearSCADA) was originally written and I first got my hands on it in 2002 there was a Registry setting to enable multi-threaded logic execution. It appeared to work but was removed from the product pretty quickly, not sure why...


MikeSI (Ensign)

Posted: 2024-12-04 04:49 AM

In response to AdamWoodlandToo

Thanks Adam !! That insight was both useful and really interesting.

 

ps: don't feel guilty about the Oracle oversight. I think we can all fess up and say we've done something similar (or much worse 🙃).
