EcoStruxure Geo SCADA Expert Forum
Schneider Electric support forum about installation, configuration, integration and troubleshooting of EcoStruxure Geo SCADA Expert (ClearSCADA, ViewX, WebX).
Posted: 2020-04-06 07:07 AM . Last Modified: 2023-05-03 12:15 AM
Hi,
We have a system with a tag count of 24,550. We have noticed that there are a large number of files in the Journal folder and that their total size has grown to 6 GB.
1. How can we reduce the number of files?
2. How can we reduce (or increase) the individual file size?
3. How can we optimise the file size to improve data transfer between the redundant servers?
4. Is it possible to list all history-enabled tags using a SQL script?
Posted: 2020-04-06 10:18 AM
If you are really just concerned with the sync time, go into the Server configuration and set the Archive period for the Event Journal and Historic Data to 1 week, or to whatever longer time frame you are sure you won't need to edit events or history within. The server won't have to sync anything older than the Archive setting, because that data is marked as unchangeable so that it can be archived if you wish. Personally I don't archive anything; it is a pain because the feature is designed around archiving to removable media. Where history preservation is important, my customers rely on system backups.
The size is controlled by how much is written to the Events or History.
For points with history enabled, you can apply compression from the Historic tab to limit how much is being written.
Points and other database objects that have events configured will record an event every time the configured condition is true.
If you think some of the configured events are unnecessary, set them to None and this will reduce the quantity.
Yes, you can check the History setting from SQL, but I don't remember exactly what you need to do to get it.
I would just use the Bulk Edit tool: check the entire database, select each point type, and then check the Historic Enabled setting. Export the results to an Excel file.
Posted: 2020-04-06 08:33 PM . Last Modified: 2020-04-06 08:34 PM
It's a little odd that you mention historic and alarm summary in your title but only the event journal in your post. Most of the above applies to all three, but I'd encourage you to read the relevant Help pages for each.
1. The number of files is a function of the number of objects, the time period, and the stream size. We don't usually recommend changing the stream size, as this can have an impact on performance. These settings are in the server configuration under Historic Configuration.
2. The file size is a function of the number of entries in the time period. You should identify overactive objects and resolve the underlying issues, whether that is a physically noisy point, inadequate compression or DNP3 event generation settings, or an outstation that is raising a lot of alarms. For existing data, running Compress Database from the server icon and the Start Historic Optimisation method on the root group's historian aggregate can help (again, look them up in Help for details on what they do).
3. You can compress the data that is sent between redundant servers by ticking the compress check box in the partner settings. This has an associated performance penalty - refer to Help for guidance.
4. Yes, but you would need to query every table that has the aggregate on it. Which tables these are will vary with the drivers you're using.
SELECT
FULLNAME, HISTORIC
FROM
<list of tables>
To get the list of tables you'll need to know your way around the schema. CDBPoint, I think, will have everything you're concerned with; from there, dig down to find the tables relevant to your system that carry the historic aggregate.
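For illustration, a concrete version of that sketch against the point table alone might look like the following (just a sketch on my part - confirm the exact table and column names in the Database Schema section of Help for your release):
SELECT FullName, Historic
FROM CDBPoint
That returns each point's full name together with its Historic aggregate column, which you can then filter or export to see which points have history enabled.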
Posted: 2020-04-07 01:24 AM
This query will list objects with historic data enabled:
SELECT FullName from CHistory INNER JOIN CDBObject ON CHistory.Id=CDBObject.Id
Enter this SQL into any query window, or use the QueryPad utility (remember to press Ctrl+Enter to run a query).
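If you also want the object type alongside the name, something like the following should work (I'm assuming here that CDBObject exposes a TypeName column - check the schema Help if it doesn't):
SELECT CDBObject.FullName, CDBObject.TypeName FROM CHistory INNER JOIN CDBObject ON CHistory.Id = CDBObject.Id ORDER BY CDBObject.FullName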
Posted: 2020-04-07 02:14 AM
I would say that the most likely cause of incredibly large 'Historic' data is the incorrect use of the 'Event' severity setting.
It is an absolute bugbear of mine when people use this setting.
An Event Journal entry takes up 768 bytes; a historic data entry takes 32 bytes.
So my (not so humble) opinion is that you should NEVER set anything to the 'Event' severity type. If you care about the state of something enough, then it should possibly be an alarm.
If you only care about the state retrospectively (i.e. for a later audit or similar), then you can probably handle it perfectly well with just historic data and no Event entry.
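To put rough numbers on that: 100,000 state changes recorded as journal events would consume about 76.8 MB (100,000 × 768 bytes), whereas the same changes stored only as historic data would take about 3.2 MB (100,000 × 32 bytes) - a 24:1 difference.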
I'd also follow the advice of the others 🙂
Posted: 2020-04-09 12:15 AM
Thank you all!