
[Imported] Historic Files - Too many event records?

EcoStruxure Geo SCADA Expert Forum

Find out how SCADA systems and networks like EcoStruxure Geo SCADA Expert help industrial organizations maintain efficiency and process data for smarter decision-making with IoT, RTU and PLC devices.

Janeway

>>Message imported from previous forum - Category:ClearSCADA Software<<
User: sbeadle, originally posted: 2019-03-07 09:30:40 Id:379
I have this alarm (or something like it) - why does it appear, and what should I do about it?

This feature is designed to protect ClearSCADA from bad configuration or bad devices reporting too much data.

The Warning level just raises an alarm, while the Maximum level causes ClearSCADA to discard events for that time period.

They apply to Historic value data, Events, Alarm Summary and Configuration Change data.
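The two levels described above can be sketched like this (a plain illustration of the behaviour, not Geo SCADA code; the threshold values and names are made up for the example):

```python
WARNING_LIMIT = 100_000   # illustrative thresholds, not real defaults
MAXIMUM_LIMIT = 500_000

def check_record_count(record_count: int) -> str:
    """Mimic the two levels: Warning raises an alarm only,
    Maximum discards further events for the period."""
    if record_count >= MAXIMUM_LIMIT:
        return "discard"        # events for this period are dropped
    if record_count >= WARNING_LIMIT:
        return "alarm"          # alarm raised, logging continues
    return "ok"

print(check_record_count(50_000))    # ok
print(check_record_count(150_000))   # alarm
print(check_record_count(600_000))   # discard
```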

From 2017 R2, the Events configuration includes both a stream limit and an object limit. (The stream limit will be higher, since a stream bunches together the events of a whole range of object ids.)

Q - Is there a place where I can find the number of records for each file?
I want to set good values for these parameters.

You can:
a) Check the largest files in Server Status and the Snapshot files: see the section Historic | Historian and scroll right. (Record sizes are in another column.)
b) Use a query to find largest files and details:
select top (100) * from CDBEventFile order by RecordCount desc
(Likewise for the tables CDBAlarmSummaryFile, CDBConfigChangesFile and CDBHistoricFile.)
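The same query shape works for each of the file tables, so it can be generated rather than typed four times. A small sketch (the table names come from the post; the query text just mirrors the CDBEventFile example above):

```python
# File tables named in the post; one query per table, largest files first.
TABLES = ["CDBEventFile", "CDBAlarmSummaryFile",
          "CDBConfigChangesFile", "CDBHistoricFile"]

def largest_files_query(table: str, limit: int = 100) -> str:
    """Build a 'largest files' query in the same shape as the example."""
    return f"select top ({limit}) * from {table} order by RecordCount desc"

for t in TABLES:
    print(largest_files_query(t))
```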

CDBHistoric data is one file per object per week.

Stream data in CDBEvent, CDBAlarmSummary and CDBConfigChanges is not stored as one file per hour per object; it is one file per hour per stream. A stream is a range of objects, with ids going from 0 to (say) 1023, then the next from 1024 to 2047, and so on. You can set the stream size in the Server Config tool in the Historic Configuration pages: adjust the field 'Stream Size'.
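The grouping above is simple integer arithmetic. A sketch (the 1024 stream size and the function names are illustrative, not Geo SCADA internals):

```python
STREAM_SIZE = 1024  # illustrative; configurable via 'Stream Size' in Server Config

def stream_id(object_id: int, stream_size: int = STREAM_SIZE) -> int:
    """Return the stream an object's events are filed under:
    objects 0-1023 share stream 0, 1024-2047 share stream 1, etc."""
    return object_id // stream_size

def event_file_key(object_id: int, hour: int) -> tuple:
    """One file per stream per hour: all objects in the same stream,
    logged in the same hour, land in the same file."""
    return (stream_id(object_id), hour)

# Objects 100 and 900 fall in the same stream; 2000 does not.
print(stream_id(100), stream_id(900), stream_id(2000))   # 0 0 1
print(event_file_key(100, 5) == event_file_key(900, 5))  # True
```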

a) Only change this before commissioning, and it's a lot quicker if you delete data first. (Stop, delete the relevant historic folder and restart). Otherwise you can be waiting a long time at startup for existing files to be reorganised. This is a good argument for doing system design early!
b) You are looking for a balance between file size (not too big) and the number of stream files (not too many!). A single file larger than 50 MB can cause delays when queried, as the database is locked while the file is loaded into memory.
c) Make sure you use the Events object limit configuration on the root group to cap event logging on a per-object basis. This is a good way to prevent a 'denial of service' by a field device logging too much data.
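To see how stream size trades off against file size, a back-of-envelope calculation helps (all numbers below are illustrative assumptions, not Geo SCADA defaults):

```python
def stream_file_mb(objects_per_stream: int,
                   events_per_object_per_hour: int,
                   bytes_per_record: int) -> float:
    """Rough size of one hourly stream file, in MB."""
    return (objects_per_stream * events_per_object_per_hour
            * bytes_per_record) / 1e6

# e.g. 1024 objects per stream, 10 events/object/hour, 64-byte records
size = stream_file_mb(1024, 10, 64)
print(round(size, 2))  # 0.66 -> comfortably under the ~50 MB guideline
```

If the result approached 50 MB, you would shrink the stream size; if it produced thousands of tiny files, you would grow it.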

Hope this helps.