Building Automation Knowledge Base

Schneider Electric Building Automation Knowledge Base is a self-service resource to answer all your questions about EcoStruxure Building suite, Andover Continuum, Satchwell, TAC…

Top Labels
  • Andover Continuum 2,209
  • TAC Vista 2,045
  • EcoStruxure Building Operation 1,851
  • TAC IA Series 1,825
  • TAC INET 1,458
  • Field Devices 721
  • Satchwell BAS & Sigma 474
  • EcoStruxure Security Expert 332
  • Satchwell MicroNet 252
  • EcoStruxure Building Expert 228
  • EcoStruxure Access Expert 149
  • CCTV 53
  • Project Configuration Tool 47
  • EcoStruxure Building Activate 17
  • EcoStruxure Building Advisor 12
  • ESMI Fire Detection 10
  • Automated Engineering Tool 5
  • EcoStruxure Building Data Platform 3
  • EcoStruxure Workplace Advisor 1
  • EcoStruxure for Retail - IMP 1
Top Contributors
  • Product_Support
  • DavidFisher
  • Cody_Failinger
Related Products
  • EcoStruxure™ Building Operation
  • SmartX IP Controllers
  • EcoStruxure™ Building Advisor

Related Forums

  • Intelligent Devices Forum

Building Automation Knowledge Base


Second Security Key disables one or more features of the first Security Key on the same computer

Issue: CyberStation and webClient are installed on the same computer. The second Security Key disables one or more features of the first Security Key.
Product Line: Andover Continuum
Environment: Andover Continuum CyberStation, Andover Continuum webClient
Cause: A second Security Key was inserted into a USB port on the workstation, or only one computer is available for both applications.
Resolution: Use only one Security Key (dongle, USB key) per workstation (PC). The key must be installed to run the Continuum CyberStation or webClient server. Never install CyberStation and webClient on the same workstation; even in a LAN environment, CyberStation and webClient must run on separate computers.
Note: webClient has some services disabled to allow for more robust web capabilities. To see what is enabled on a Security Key, check out How to identify what is enabled on a Continuum Security Key.
Product_Support
2018-09-07 04:20 AM
Last Updated: CraigEl, 2022-08-09 05:19 AM

Labels:
  • Andover Continuum
1578 Views

Connection Objects in Xenta Server effect on I/NET system

Issue: What are the effects of a connection object on the I/NET controller LAN?
Product Line: Satchwell MicroNet, TAC INET, TAC Vista
Environment: Xenta 527/913/731, I/NET
Cause: The I/NET communication protocol has a maximum baud rate of 19,200 bps, which allows for a maximum of 10 messages per second on the I/NET controller LAN. This limit applies to the entire controller LAN, not just a single controller. Optimal performance is at 5 or 6 messages per second on the controller LAN.
Resolution: Any connection objects involving I/NET points must stay within the limits imposed by the controller LAN protocol, so adjust the periods of the connection objects to avoid exceeding these limits. Every connection object whose source is an I/NET point requires the value to be read from the I/NET controller before it can determine whether the value has changed and should be sent to the target of the connection object. If the default period of 10 seconds is used for all connection objects, then only 100 points will consume 100% of the bandwidth on the I/NET controller LAN and negatively impact the performance of the I/NET network.
Instead, count the number of connection objects with source values from I/NET. Divide this number by 4, to leave room for values being written to the I/NET network, to determine the minimum recommended period for your connection objects and avoid overloading the I/NET controller LAN. Example: 250 points being sent from I/NET to a third-party system / 4 messages per second = 63-second period.
This can result in larger than expected periods, and some points may be more critical than others, which will require some prioritization. If only 15 of the 250 points require a faster response time, they can use a lower period, but this will affect the period used for the lower-priority points. It is recommended to avoid periods below 10 seconds if at all possible. Example: 235 points at lower priority / 4 messages per second = 59-second period. 59-second period for low-priority points / 10-second period for high-priority points = 6 polls per point * 15 high-priority points = 90 reads for high-priority points. 235 points + 90 polls = 325 values read from I/NET / 4 messages per second = 82-second period for the lower-priority points to accommodate the 15 high-priority points.
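A minimal sketch of the period arithmetic above (Python, illustrative only; the 4-messages-per-second budget and the point counts are assumptions taken from the article's own examples, and values are rounded up at each step as in those examples):

import math

def recommended_period(low_priority_points, high_priority_points=0,
                       high_priority_period=10, reads_per_second=4):
    """Minimum recommended period (seconds) for low-priority connection
    objects, rounding up at each step as in the examples above."""
    base_period = math.ceil(low_priority_points / reads_per_second)
    if high_priority_points == 0:
        return base_period
    # Each high-priority point is polled several times per low-priority cycle.
    polls_per_high_point = math.ceil(base_period / high_priority_period)
    extra_reads = polls_per_high_point * high_priority_points
    return math.ceil((low_priority_points + extra_reads) / reads_per_second)

print(recommended_period(250))                           # 63 s (first example)
print(recommended_period(235, high_priority_points=15))  # 82 s (second example)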
Product_Support
2018-09-06 01:23 PM

Labels:
  • Satchwell MicroNet
  • TAC INET
  • TAC Vista
1587 Views

I/A Series R2 oBIX User Guide

Issue: I/A Series R2 oBIX User Guide
Product Line: TAC IA Series
Environment: I/A Series R2 oBIX driver
Cause: The R2 oBIX User Guide is not installed with I/A Series R2 version 529 and earlier. It is available with the installation of I/A Series R2 version 532 and later.
Resolution: Niagara release 2 oBIX User Guide
Product_Support
2018-09-11 06:14 AM
Last Updated: Ramaaly, 2022-03-31 09:58 AM

Labels:
  • TAC IA Series
1598 Views

I/A Series (Niagara) R2 UNC won't boot after downgrading system software to a version earlier than release 529

Issue: After using the I/A Series (Niagara) R2 Admin Tool Installation Wizard "Upgrade" process to install an earlier version of the R2 OS and NRE, the UNC will no longer reboot properly. The heartbeat LED turns on and stays on. The software was downgraded to a release earlier than R2.301.527. While watching the bootup process on a serial console connection, the bootup log shows the following message just before the UNC reboots:
old clock frequency detected, modifying and rebooting ->
Since the BOOT flash chip does not allow its clock frequency to be changed, this "change, reboot, start, clock change, reboot" cycle continues as long as the UNC is powered.
Product Line: TAC IA Series
Environment: UNC-410 with serial number 36218b and newer; UNC-520 with serial number 36184b and newer
Cause: The UNC features a new bootflash chip that is incompatible with older releases of the R2 software. When the affected UNCs are shipped by the factory, the following yellow warning label is attached to the transformer cover: "WARNING! This Universal Network Controller (UNC) features a new bootflash chip. To properly configure or install the software you MUST use one of the following releases: Release R2.301.527 or newer. Failure to follow these instructions will cause the UNC to become inoperative. Refer to Product Announcement PA-00123 on The Exchange Download Center for more details."
Resolution: Contact Product Support to obtain a return authorization number. A UNC returned with this issue will be repaired by the factory for a nominal fee of $250.
Product_Support
2018-09-07 02:07 PM

Labels:
  • TAC IA Series
1587 Views

How to set up the Keypad so that it logs out the user after a specified amount of time.

Issue: The user is not being logged out of the Keypad after the logout time.
Product Line: EcoStruxure Security Expert
Environment: Security Expert SX-KLCS (Security Expert Keypad)
Cause: The user had an access level with a Menu Group that had the "Installer Menu Group" option enabled. This option disables the logout time for the keypad so that the user is not logged out while the system is being configured.
Resolution:
1. Disable the "Installer Menu Group" option under the associated Groups | Menu Groups | Option menu.
2. This will allow the keypad to log the user out after the "Time User Is Logged In (Seconds)" value (10 s) set under Expanders | Keypads | Configuration.
Product_Support
2018-09-10 09:43 AM
Last Updated: CraigEl, 2022-08-14 05:16 PM

Labels:
  • EcoStruxure Security Expert
1581 Views

Change the Door Mode (EntryNormMode) from SiteMode+CardMode to PinMode in PE program

Issue: A door needs to be set to SiteMode+CardMode for weekdays and SiteMode+CardMode+PinMode for weekends.
Product Line: Andover Continuum
Environment: NetController (CX9900, CX9940, CX9702, ACCX5740, ACX5730); Infinet Controller (ACX780, ACX781)
Cause: Customers want a PE program that changes the door mode to be more secure over the weekend, according to the building schedule.
Resolution: In the PE program:
Set DoorName\EntryNormMode = 3   ' Set the door mode to SiteMode+CardMode for weekdays
Set DoorName\EntryNormMode = 11  ' Set the door mode to SiteMode+CardMode+PinMode for weekends
The values for the different modes are:
1 = SiteMode
2 = CardMode
4 = GeneralMode
8 = PINMode
16 = ScheduleMode
The individual values are added together to obtain the desired mode. As seen above, setting the mode to 3 makes the door validate site + card. It is recommended that both the normal and no-comm modes be set (EntryNormMode, EntryNoCommMode). For a dual-reader door, use the corresponding ExitNormMode and ExitNoCommMode.
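The mode value is simply the sum of the individual flags listed above. A minimal sketch (Python, illustrative only, not Continuum PE code) of how the weekday value 3 and the weekend value 11 are composed:

from enum import IntFlag

# Illustrative sketch only: EntryNormMode is the sum of the mode flag
# values listed in the article.
class DoorMode(IntFlag):
    SITE = 1
    CARD = 2
    GENERAL = 4
    PIN = 8
    SCHEDULE = 16

weekday = DoorMode.SITE | DoorMode.CARD                 # 1 + 2 = 3
weekend = DoorMode.SITE | DoorMode.CARD | DoorMode.PIN  # 1 + 2 + 8 = 11
print(int(weekday), int(weekend))                       # 3 11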
Product_Support
2018-09-06 01:15 PM
Last Updated: DavidFisher, 2019-11-05 02:25 PM

Labels:
  • Andover Continuum
1572 Views

Analog Input Configuration Objects (AIC) Description

Issue: Some AIC objects are pre-configured for the most common thermistors used in the industry. However, in Building Expert the description of an AIC object cannot be seen until it has been added.
Product Line: EcoStruxure Building Expert
Environment: Multi Purpose Manager (MPM), AIC
Cause: Feature not available on the MPM.
Resolution: The description of each AIC object is listed below:
AIC1 to AIC10 = Empty (manually configurable objects)
AIC11 = 10 kOhm Type II Thermistor, Celsius
AIC12 = 10 kOhm Type II Thermistor, Fahrenheit
AIC13 = 10 kOhm Type III Thermistor, Celsius
AIC14 = 10 kOhm Type III Thermistor, Fahrenheit
AIC15 = 1 kOhm Pt1000 Platinum RTD Sensor, Celsius
AIC16 = 1 kOhm Pt1000 Platinum RTD Sensor, Fahrenheit
AIC17 = 100 Ohm Pt100 Platinum RTD Sensor, Celsius
AIC18 = 100 Ohm Pt100 Platinum RTD Sensor, Fahrenheit
AIC19 = 1 kOhm Ni1000 Nickel RTD Sensor, Celsius
AIC20 = 1 kOhm Ni1000 Nickel RTD Sensor, Fahrenheit
Product_Support
2018-09-10 02:21 PM
Last Updated: DavidChild, 2021-09-13 12:56 AM

Labels:
  • EcoStruxure Building Expert
1597 Views

What is the procedure for redisplaying the WorkPlace Tech Tool stencils (Visio Shape Sheets) after accidentally or intentionally closing them?

Issue: When using the I/A Series WorkPlace Tech Tool software to engineer an application for the I/A Series LON or BACnet controllers, the Visio shape sheets (stencils) containing the application drawing objects may be accidentally or intentionally closed. Due to the names of the Visio shape sheet files containing the WPT stencils, it is very difficult to reopen the proper series of shape sheets without repeated trial and error. When upgrading an application drawing from WPT 4.0 to WPT 5.7 or WPT 5.8, this same issue may occur. Use this procedure to reattach the proper stencils to the upgraded application file.
Environment: I/A Series WorkPlace Tech Tool software version 4.0.0 or newer
Cause: The names of the Visio shape sheet (stencil) files for the WorkPlace Tech Tool are not self-explanatory, making it nearly impossible to manually select and reattach the correct stencils to the current application file.
Resolution: Open the WPT Hardware Wizard, either from the Application menu or by right-clicking on the drawing, and then click the Hardware Wizard [Finish] button. The WPT Hardware Wizard will use the information pertaining to the selected controller type, model, and version to ensure that the application is configured properly and the necessary drawing stencils are available, redisplaying them on the left edge of the drawing.
Product_Support
2018-09-06 01:57 PM

Labels:
  • TAC IA Series
1590 Views

What's the part number for the voltage divider used on universal inputs on the MNL and MNB controllers?

Issue: A 0-10 Vdc sensor will be wired into an MNL/MNB controller. The voltage must be dropped from 0-10 Vdc to 0-5 Vdc to interface and read properly. What is the part number for ordering the voltage divider used to achieve this?
Environment: MNL controllers, MNB controllers
Cause: Another manufacturer's sensor designed with a 0-10 Vdc control signal.
Resolution: The part number is AD-8961-220. It is available for purchase via iPortal (http://iportal2.schneider-electric.com). The controller analog input object would be set up as follows:
Linput = 0.909 Vdc, LScale = 0
Hinput = 4.545 Vdc, HScale = 100
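For reference, the Linput/Hinput and LScale/HScale pairs above define a simple linear mapping from the divided-down voltage to the displayed value. A minimal sketch of that mapping (Python, illustrative only; the example mid-range reading is an assumption, not a value from the article):

# Illustrative sketch of the linear input scaling configured above.
# The (Linput, LScale) and (Hinput, HScale) pairs come from the article;
# the example reading below is an assumption for demonstration only.
L_INPUT, L_SCALE = 0.909, 0     # volts at the controller input -> 0 %
H_INPUT, H_SCALE = 4.545, 100   # volts at the controller input -> 100 %

def scaled_value(measured_volts):
    """Linearly interpolate the engineering value from the measured voltage."""
    slope = (H_SCALE - L_SCALE) / (H_INPUT - L_INPUT)
    return L_SCALE + (measured_volts - L_INPUT) * slope

print(round(scaled_value(2.727), 1))  # a mid-range reading scales to ~50.0 %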
Product_Support
2018-09-10 03:35 AM

Labels:
  • TAC IA Series
1573 Views

Display an object's BACnet instance number within the object in the Niagara G3 Wire Sheet

Issue: In Niagara G3, how do I display an object's BACnet instance number within the object in the Wire Sheet?
Product Line: TAC IA Series
Environment: Niagara G3, all versions
Cause: The BACnet object ID is not displayed by default.
Resolution: Steps for displaying the BACnet object's ID in the object:
1. Right-click the BACnet object whose objectID you want to display and select Slot Sheet.
2. Right-click on the view pane and select "Add Slot". Enter "objectID" for Name.
3. For Type, select "bacnet" and "BacnetObjectIdentifier" from the drop-down menu, then select "OK" to save.
4. Expand the BACnet object in the tree, right-click its "Proxy Ext", and select "Link Mark" from the drop-down menu.
5. Right-click on the BACnet object in the tree and select "Link from 'Proxy Ext'".
6. Under Proxy Ext (Source) select "Object ID", and under the BACnet object (Target) select "objectID", then click the OK button.
The BACnet objectID will now be displayed in the object.
Product_Support
2018-09-06 12:27 PM
Last Updated: DavidFisher, 2019-08-08 07:40 AM

Labels:
  • TAC IA Series
1588 Views

DC1100 / DC1400 displays message "No Sensors Connected" or displays temperature values that are not representative of the temperatures being sensed...

Issue: DC1100 / DC1400 displays the message "No Sensors Connected" or displays temperature values that are not representative of the temperatures being sensed.
Environment: DC1100 / DC1400 Discrete Controller and associated sensors
Cause: The DC1100 / DC1400 is connected to sensors with an incompatible resistance range. The resistance range of the sensors compatible with the DC1100 differs from that of the sensors compatible with the DC1400. The "No Sensors Connected" message may appear in the controller display when the connected sensors have a resistance outside the range that the controller's sensor inputs are designed to recognise.
Resolution: Check the sensor type connected to the controller.
The DC1100 controller is compatible with the following sensors:
  • STC 600 D Contact Temperature Sensor (original Drayton sensor = A704)
  • STO 600 D Outside Air Temperature Sensor (original Drayton sensor = A702)
  • STP 600 D Pipe Temperature Sensor (original Drayton sensor = A703)
  • STR 600 D Room Temperature Sensor (original Drayton sensor = A701)
The DC1400 controller is compatible with the following sensors:
  • STC 600 Contact Temperature Sensor (original Satchwell sensor = DST)
  • STO 600 Outside Air Temperature Sensor (original Satchwell sensor = DOT)
  • STP 660 Pipe Temperature Sensor (original Satchwell sensor = DWT)
  • STR 600 Room Temperature Sensor (original Satchwell sensor = DRT)
Product_Support
2018-09-06 02:06 PM

Labels:
  • Field Devices
1588 Views

"Read-only because object refers to a standard file" when editing Menta from Workstation

Issue "Read-only because object refers to a standard file" when trying to change Menta application (.mta file) from TAC workstation. "File object is locked by another application" when trying to edit a Menta application (.mta file) (file is not marked as reserved) Product Line TAC Vista Environment Menta Workstation Cause If a standard file is used, it is not possible to edit the "local" Menta file. Other causes are explained in "File object is locked by another application" error when trying to edit Menta file. Resolution If the referenced file is supposed to be a standard file, locate and edit the file referenced. If the referenced file is not supposed to be a standard file: Click on the more button under the MTA tab in the properties for the device. Uncheck "Reference standard file object.
Product_Support
2018-09-07 06:57 AM
Last Updated: CraigEl, 2022-08-10 01:44 AM

Labels:
  • TAC Vista
1587 Views

Acknowledge alarms from HMI

Issue: Is it possible to acknowledge an alarm from a third-party HMI?
Environment: StruxureWare Building Operation
Cause: The only ways to acknowledge an alarm in SmartStruxure are via Workstation/Webstation or EcoStruxure Web Services.
Resolution: The only way to acknowledge an alarm in SmartStruxure from a third-party HMI is via EcoStruxure Web Services.
Product_Support
2018-09-06 09:50 AM

Labels:
  • EcoStruxure Building Operation
1587 Views

SQL Server Systems Overview (I/NET Seven)

Issue: Is there a document that describes the SQL terms used as part of I/NET Seven's File Equalization feature?
Product Line: TAC INET
Environment: I/NET Seven File Equalization, SQL 2000 (all editions)
Cause: An overview of SQL Server systems is needed to explain the SQL terms used within I/NET Seven's File Equalization feature.
Resolution: SQL Server Systems Overview (I/NET Seven)
A SQL Server system can be implemented as a client/server system or as a stand-alone desktop system. The type of system you design will depend on the number of users that will be accessing the database simultaneously and on the kind of work they will be doing. Generally, the most typical environment is a two-tier configuration consisting of SQL Server 2000 software configured on a PC with client connections over Ethernet.
Desktop System
SQL Server can also be used as a stand-alone database server that runs on a desktop computer or a laptop. This is referred to as a desktop system. The client applications run on the same computer that stores the SQL Server engine and the databases. Only one computer is involved in this system. Therefore, no network connection is made from client to server; the client makes a local connection to its local installation of SQL Server. The desktop system is useful in cases in which a single user accesses the database or in which a few users who share a computer access the database at different times. For example, in a small store with one computer, several employees might access the store's database to insert customer and sales information, but they must take turns accessing the system. Also, as in this example, desktop systems are useful when the database or databases are small.
Designing a Microsoft SQL Server System
Before you even begin loading the operating system and Microsoft SQL Server, you should have a good idea how you want to design your SQL Server system. By carefully designing your SQL Server system, you can avoid costly downtime caused by having to either rebuild the system or reinstall SQL Server with different options. When we talk in general about a system, we mean all of the hardware and software that make up all of the computers working on behalf of the user in order to access data from one or more SQL Server databases.
Security
Today's computing environments have differing security requirements. Windows 2000 allows you to customize the level of security to meet your needs. The following features assist you in securing both your computer and network access:
Windows NT file system (NTFS) - The Windows NT file system is the core security technology in Windows 2000. NTFS provides file security at a group or user level. Required for I/NET Seven installations.
Windows NT security model - This feature permits only authorized users to access system resources. This model controls which user or users can access objects, such as files and printers, as well as which actions individuals can take on an object. In addition, you can enable auditing to track and monitor actions taken, as well as to track actions that a user attempted that would have violated a security policy.
Encrypting file system (EFS) - This feature encrypts files with a randomly generated key. The encryption and decryption processes are transparent to the user. EFS requires your disks to be formatted with NTFS.
IP Security (IPSec) support - IP Security helps protect data transmitted across a network.
This is an integral component for providing security for virtual private networks (VPNs), which allow organizations to transmit data securely across the Internet. Figure 2-1 displays the IP Security configuration dialog box. Kerberos support This feature provides an industry-standard, highly secure authentication method for single-logon support for Windows 2000—based networks. The Kerberos protocol is an Internet standard and is highly effective when you are integrating Windows 2000 systems into an environment with a different operating system, such as UNIX. Microsoft SQL Server Replication: Overview and Snapshot Replication Microsoft SQL Server database replication technology is designed to help you distribute data and stored procedures among the servers in your company. Replication allows you to configure systems to copy data to other systems automatically. Using database replication, you can copy as much or as little data as you want, and you can allocate data among as many systems as you want. Because the replication process is automatic and because during replication a database stores data about the state of the replication as well as the replicated data, there is no danger in losing data. If a replication procedure is interrupted (say, due to a power failure), replication resumes from the point of failure as soon as the systems are running normally again. What Is Database Replication? Database replication is the act of copying, or replicating, data from one table or database to another table or database. Using this technology, you can distribute copies of an entire database to multiple systems throughout your company, or you can distribute selected pieces of the database. When SQL Server replication technology is used, the task of copying and distributing data is automated. No user intervention is needed to replicate data once replication has been set up and configured. Because the data replication and processing is done from within a SQL Server database, there is additional stability and recoverability. If a failure occurs during replication (or while any other SQL Server transaction is being performed), operations resume at the point of failure once the problem is fixed. Because of this, many people prefer replication to other methods of moving data between systems. Replication Components Microsoft SQL Server 2000 replication is based on the publish-and-subscribe metaphor first used to implement replication in SQL Server 6. This metaphor consists of three main concepts: publishers, distributors, and subscribers. A publisher is a database system that makes data available for replication. A distributor is the database system that contains the distribution database, or pseudodata, used to maintain and manage the replication. A subscriber is a database system that receives replicated data and stores the replicated database. Publishers The publisher consists of a Microsoft Windows system hosting a SQL Server database. This database provides data to be replicated to other systems. In addition, the SQL Server database keeps track of which data has changed so that it can be effectively replicated. The publisher also maintains information about which data is configured for replication. Depending on the type of replication that is chosen, the publisher does some work or little work during the replication process. This will be explained in further detail later in this chapter. 
A replicated environment can contain multiple subscribers, but any given set of data that is configured for replication, called an article, can have only one publisher. Having only one publisher for a particular set of data does not mean that the publisher is the only component that can modify the data—the subscriber can also modify and even republish the data. Distributors In addition to containing the distribution database, servers acting as distributors store metadata, history data, and other information. In many cases, the distributor is also responsible for distributing the replication data to subscribers. The publisher and the distributor are not required to be on the same server. In fact, you will likely use a dedicated server as a distributor. Each publisher must be assigned a distributor when it is created, and a publisher can have only one distributor. NOTE Metadata is data about data. Metadata is used in replication to keep track of the state of replication operations. It is also the data that is propagated by the distributor to other members of the replication set and includes information about the structure of data and the properties of data, such as the type of data in a column (numeric, text, and so on) or the length of a column. Subscribers As mentioned, subscribers are the database servers that store the replicated data and receive updates. Subscribers can also make updates and serve as publishers to other systems. For a subscriber to receive replicated data, it must subscribe to that data. Subscribing to replication involves configuring the subscriber to receive that data. A subscription is the database information to which you are subscribing. Types of Replication SQL Server offers three types of replication: snapshot, transactional, and merge. These replication types offer varying degrees of data consistency within the replicated database, and they require different levels of overhead. Snapshot Replication Snapshot replication is the simplest replication type. With snapshot replication, a picture, or snapshot, of the database is taken periodically and propagated to subscribers. The main advantage of snapshot replication is that it does not involve continuous overhead on publishers and subscribers. That is, it does not require publishers to continuously monitor data changes, and it doesn’t require the continuous transmission of data to subscribers. The main disadvantage is that the database on a subscriber is only as current as the last snapshot. In many cases, as you will see later in this chapter, snapshot replication is sufficient and appropriate—for example, when source data is modified only occasionally. Information such as phone lists, price lists, and item descriptions can easily be handled by using snapshot replication. Transactional Replication – ( Is not applicable to I/NET Seven) Transactional replication can be used to replicate changes to the database. With transactional replication, any changes made to articles (a set of data configured for replication) are immediately captured from the transaction log and propagated to the distributors. Using transactional replication, you can keep a publisher and its subscribers in almost exactly the same state, depending on how you configure the replication. Transactional replication should be used when it is important to keep all of the replicated systems current. 
Transactional replication uses more system overhead than snapshot replication because it individually applies each transaction that changes data in the system to the replicated systems. Merge Replication Merge replication is similar to transactional replication in that it keeps track of the changes made to articles. However, instead of individually propagating transactions that make changes, merge replication periodically transmits a batch of changes. Because merge replication transmits data in batches, it is also similar to snapshot replication. Merge replication differs from transactional replication in that it is inherently multidirectional. With merge replication, publishers and subscribers can update the publication equally. Transactional replication also allows subscribers to update the publication, but the two replication types function quite differently. Introduction to Merge Replication Merge replication performs multidirectional replication between the publisher and one or more subscribers. This allows multiple systems to have updatable copies of the publication and to modify their own copies. A modification on the publisher will be replicated to the subscribers. A modification on a subscriber will be replicated to the publisher and then replicated to the other subscribers. Unlike transactional replication, merge replication works by installing triggers on the publisher and on the subscribers. Whenever a change is made to the publication or a copy of it, the appropriate trigger is fired, which causes a replication command to be queued up to be sent to the distribution database. This command is eventually sent to the distribution database and then sent to participating systems. Because merge replication operates this way, it requires much more overhead, especially on the publisher, than does transactional replication. As you will learn in this chapter, the key components involved in the merge replication system are the Merge Agent and the distribution database. The Merge Agent reconciles (merges) incremental changes that have occurred since the last reconciliation. When you use merge replication, no Distribution Agent is used—the Merge Agent communicates with both the publisher and the distributor. The Snapshot Agent is used only to create the initial database. The Merge Agent performs the following tasks. The Merge Agent uploads all changes from the subscriber. All of the rows without a conflict (rows not modified on both the publisher and the subscriber) are uploaded immediately; those with a conflict (rows modified on both systems) are sent to the conflict resolver. The resolver is a module that is used to resolve conflicts in merge replication. You can configure this module to resolve conflicts based on your needs. All changes are applied to the publisher. The Merge Agent uploads all changes from the publisher. All of the rows without a conflict are uploaded immediately; those with a conflict are sent to the conflict resolver. All changes are applied to the subscriber. This process will repeat as scheduled. With push subscriptions, the Merge Agent runs on the distributor. With pull subscriptions, the Merge Agent runs on the subscriber. Each merge publication has its own Merge Agent. Publications A publication is a set of articles grouped together as a unit. Publications provide the means to replicate a logical grouping of articles as one replication object. 
For example, you can create a publication to be used to replicate a database consisting of multiple tables, each of which is defined as an article. It is more efficient to replicate a database by replicating the entire database in one publication than by replicating tables individually. A publication can consist of a single article, but it almost always contains more than one article. However, a subscriber can subscribe only to publications, not to articles. Therefore, if you want to subscribe to a single article, you must configure a publication that contains only that article and then subscribe to that publication. Push and Pull Subscriptions Replicated data can be propagated in a number of ways. All propagation methods are based on either push subscriptions or pull subscriptions. A subscriber can support a mixture of push and pull subscriptions simultaneously. Push Subscriptions The distributor is responsible for providing updates to the subscribers. Updates are initiated without any request from the subscriber. A push subscription is useful when centralized administration is desired because the distributor, rather than multiple subscribers, controls and administers replication. In other words, the initiation and the scheduling of the replication are handled on the distributor. Pull Subscriptions Pull subscriptions allow subscribers to initiate replication. Replication can be initiated either via a scheduled task or manually. Pull subscriptions are useful if you have a large number of subscribers and if the subscribers are not always attached to the network. Because subscribers initiate pull subscriptions, subscribers not always connected to the network can periodically connect and request replication data. This can also be useful in reducing the number of connection errors reported on the distributor. If the distributor tries to initiate replication to a subscriber that does not respond, an error will be reported. Thus, if the replication is initiated on the subscriber only when it is attached, no errors will be reported. Replication Agents Several agents are used to perform the actions necessary to move the replicated data from the publisher to the distributor and finally to the subscriber: the Snapshot Agent, the Log Reader Agent, the Distribution Agent, the Merge Agent, and the Queue Reader Agent. Snapshot Agent The Snapshot Agent is used for creating and propagating the snapshots from the publisher to the distributor (or snapshot location). The Snapshot Agent creates the replication data (the snapshot) and creates the information that is used by the Distribution Agent to propagate that data (the metadata). The Snapshot Agent stores the snapshot on the distributor (or anywhere that you specify). The Snapshot Agent is also responsible for maintaining information about the synchronization status of the replication objects; this information is stored in the distribution database. The Snapshot Agent is dormant most of the time and might periodically activate, based on the schedule that you have configured, and perform its tasks. Each time the Snapshot Agent runs, it performs the following tasks: The Snapshot Agent establishes a connection from the distributor to the publisher. If a connection is not available, the Snapshot Agent will not proceed with creating the snapshot. Once the connection has been established, the Snapshot Agent locks all of the articles involved in the replication to ensure that the snapshot is a consistent view of the data. 
The Snapshot Agent establishes a connection from the publisher to the distributor. Once this connection has been established, the Snapshot Agent engineers a copy of the schema for each article and stores that information in the distribution database. This data is considered metadata. The Snapshot Agent takes a snapshot of the actual data on the publisher and writes it to a file at the snapshot location. The snapshot location does not necessarily need to be on the distributor. If all systems involved in the replication are SQL Server systems, the file is stored as a native bulk copy program. If mixed types of systems are involved in the replication, the data is stored in text files. At this point, synchronization information is set by the Snapshot Agent. After the data has been copied, the Snapshot Agent updates information in the distribution database. The Snapshot Agent releases the locks that it has held on the articles and logs the snapshot into the history file. As you can see, the Snapshot Agent is responsible for only creating the snapshot; it does not distribute it to subscribers. Other agents perform this task. Distribution Agent The Distribution Agent propagates snapshots and transactions from the distribution database to subscribers. Each publication has its own Distribution Agent. Merge Agent The Merge Agent is used in merge replication to reconcile (merge) incremental changes that have occurred since the last reconciliation. When you use merge replication, the Distribution Agent and the Snapshot Agent aren’t used—the Merge Agent communicates with both the publisher and the distributor. Backing Up Microsoft SQL Server Backing up the database is one of the DBA’s most important tasks. Having backup files and carefully planning for disaster recovery enable the DBA to restore the system in the event of a failure. The DBA is responsible for keeping the system up and running as much as possible and, in the event of a system failure, for restoring service as quickly as possible. Downtime can be both inconvenient and extremely expensive. Getting the database back up and running as soon as possible is essential. Backup Terminology Before we look at backup techniques, let’s review some terminology. In this section, you’ll learn some basic facts about backup, restore, and recovery operations. Backup and Restore Backup and restore operations are related and involve saving data from the database for later use, similar to the backup and restore operations that can be performed by the operating system. During the backup, data is copied from the database and saved in another location. The difference between an operating system backup and a database backup is that the operating system backup can save individual files, whereas the database backup saves the entire database. Usually, a database is shared by many users, whereas many operating system files belong to individual users. Thus, a database backup backs up all of the user’s data at once. Because SQL Server is designed for maximum uptime, the backup process is designed to work while the database is up and running, and even while users are accessing the database. During the restore, the backed up data is copied back to the database. (Don’t confuse restore with recovery; these are two separate operations.) Unlike the backup process, the restore process cannot be done while SQL Server is up and running. In addition, a table cannot be restored separately. 
Recovery Recovery involves the ability of the relational database management system (RDBMS) to survive a system failure and replay (recover) transactions. Because of the delay in writing changes to disk, a system failure might leave the database in a corrupted state, because some changes made to the database might not have been written to disk or changes written to disk might not have been committed. To maintain the integrity of the database, SQL Server logs all changes in a transaction log. When SQL Server restarts after a system failure, it uses the transaction log to roll forward transactions that were committed but not written to disk and to roll back transactions that were not committed at the time of the failure. In this manner, data accuracy is guaranteed. SQL Server must be prepared to handle several types of transactions during recovery, including the following: Transactions that are queries only No recovery is necessary. Transactions that changed data in the database and were committed but were not written to disk During recovery, SQL Server reads the data pages from disk, reapplies the changes, and then rewrites the pages to disk. Transactions that changed data in the database, were committed, and were written to disk During recovery, SQL Server determines that the changes were written to disk. No other intervention is required. Transactions that changed data in the database and were not committed During recovery, SQL Server uses the transaction log to undo any changes that were made to data pages and restores the database to the state it was in before the transactions started. When SQL Server restarts from a system failure, the recovery mechanism starts automatically. The recovery mechanism uses the transaction log to determine which transactions need to be recovered and which do not. Many of the transactions will not need recovery, but SQL Server must read the transaction log to determine which transactions do require recovery. SQL Server starts reading the transaction log at the point where the last checkpoint occurred. System Failure You might be wondering whether backups are really necessary if you use technologies such as Microsoft Cluster Services and RAID fault tolerance. The answer is a resounding “yes.” Your system can fail in a number of ways, and those methods of fault tolerance and fault recovery will help keep your system functioning properly through only some of them. Some system failures can be mild; others can be devastating. To understand why backups are so important, you need to know about the three main categories of failures: hardware failures, software failures, and human error. Hardware Failures Hardware failures are probably the most common type of failure you will encounter. Although these failures are becoming less frequent as computer hardware becomes more reliable, components will still wear out over time. Typical hardware failures include the following: CPU, memory, or bus failure These failures usually result in a system crash. After you replace the faulty component and restart the system, SQL Server automatically performs a database recovery. The database itself is intact, so it does not need to be restored—SQL Server needs simply to replay the lost transactions. Disk failure If you’re using RAID fault tolerance, this failure type will probably not affect the state of the database at all. You must simply repair the RAID array. 
If you are not using RAID fault tolerance or if an entire RAID array fails, your only alternative is to restore the database from the backup and use the transaction log backups to recover the database. Catastrophic system failure or permanent loss of server If the entire system is destroyed in a fire or some other disaster, you might have to start over from scratch. The hardware will need to be reassembled, the database restored from the backup, and the database recovered by means of the data and transaction log backups. Software Failures Software failures are rare, and your system probably will never experience them. However, a software failure is usually more disastrous than a hardware failure because software has built-in features that minimize the effect of hardware failures, and without these protective features, the system is vulnerable to disaster if a hardware failure occurs. The transaction log is an example of a software feature designed to help systems recover from hardware failures. Typical software failures include the following: Operating system failure If a failure of this type occurs in the I/O subsystem, data on disk can be corrupted. If no database corruption occurs, only recovery is necessary. If database corruption occurs, your only option is to restore the database from a backup. RDBMS failure SQL Server itself can fail. If this type of failure causes corruption to occur, the database must be restored from a backup and recovered. If no corruption occurs, only the automatic recovery is needed to return the system to the state it was in at the point of failure. Application failure Applications can fail, which can cause data corruption. Like an RDBMS failure, if this type of failure causes corruption to occur, the database must be restored from a backup. If no corruption occurs, no restore is necessary; the automatic recovery will return the system to the state it was in at the point of failure. You might also need to obtain a patch from your application vendor to prevent this type of failure from recurring. Human Error The third main category of failure is human error. Human errors can occur at any time and without notice. They can be mild or severe. Unfortunately, these types of errors can go unnoticed for days or even weeks, which can make recovery more difficult. By establishing a good relationship (including good communication) with your users, you can help make recovery from user errors easier and faster. Users should not be afraid to come to you immediately to report a mistake. The earlier you find out about an error, the better. The following failures can be caused by human error: Database server loss Human errors that can cause the server to fail include accidentally shutting off the power or shutting down the server without first shutting down SQL Server. Recovery is automatic when SQL Server is restarted, but it might take some time. Because the database is intact on disk, a restore is not necessary. Data loss This type of loss can be caused by someone’s accidentally deleting a data file, for example, thus causing loss of the database. Restore and recovery operations must be performed to return the database to its prefailure state. Table loss or corrupted data If a table is dropped by mistake or its data is somehow incorrectly changed, you can use backup and recovery to return the table to its original state. 
Recovery from this type of failure can be quite complex because a single table or a small set of data that is lost cannot simply be recovered from a backup. Data Base Backups All SQL Server backups are performed for a specific database. To completely back up your system, you should back up all databases in the system and their transaction logs. Don’t forget to back up the master database as well. And remember, without good backups, you might not be able to restore your data in the event of a failure. Full Backups As mentioned, a full backup involves backing up an entire database. All of the filegroups and data files that are part of this database are backed up. If you have multiple databases, you should back up all of them. A full backup is probably the most common technique for backing up small- to medium-size databases. Depending on how large the databases are, this process can be quite time consuming, so if time is an issue, you might consider performing differential backups or filegroup backups, as described next. Once you start a backup, you cannot pause it—the backup will continue until the entire database is backed up. Conclusion It is very important to understand when using SQL Server 2000 full version that this application requires more specific knowledge in deployment and configuration of this software program.
Product_Support
2018-09-06 10:44 PM
Last Updated: DavidFisher, 2020-08-13 08:00 AM

Labels:
  • TAC INET
1566 Views

Pin configuration of RJ10 for TAC Xenta OP

Issue: Pin configuration of the RJ10 connector for the TAC Xenta OP
Product Line: TAC Vista
Environment: Xenta OP
Cause: The Xenta OP has two connection methods: the 4-pin terminal block and an RJ10 port. Some users may need to make custom RJ10 cables.
Resolution: The pin order of the RJ10 port is exactly the same as the order on the terminal block right next to it. From right to left they are:
  • C1 (comms 1)
  • C2 (comms 2)
  • G+ (24 VAC power)
  • G0 (24 VAC ground)
Product_Support
2018-09-06 03:10 PM
Last Updated: PeterEdvik, 2019-05-31 05:23 AM

Labels:
  • TAC Vista
1586 Views

How can the resource count be found for a Niagara R2 station?

Issue: How can the resource count be found for a Niagara R2 station?
Environment: Niagara R2 station
Cause: The user can't locate the resource counts using Workplace Pro or a web browser.
Resolution: There are two methods.
Using Workplace Pro:
1. Open a station (note: the station must be running).
2. Highlight the station name.
3. Select Search > Resource Count from the drop-down menu bar (Alt-R shortcut).
Using a web browser:
1. In the address bar, type http://[stationHost]/prism/resources.
2. Sign in to the station using an Admin user account.
This will display a summary at any level of the database. For each level there is a "Total objects" listing at the top of the page; this is where the resource count is displayed for that level.
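To capture that same page programmatically (for example, to log resource counts over time), a minimal sketch is shown below. It assumes the station accepts HTTP Basic authentication; the host name and credentials are placeholders, and a form-based sign-in may be required instead, so treat this only as a starting point.

# Illustrative sketch only: fetch the Niagara R2 resource summary page.
# Assumes HTTP Basic authentication is accepted; the host and credentials
# are placeholders, not values from the article.
import requests

STATION_HOST = "192.168.1.10"   # placeholder station host
url = f"http://{STATION_HOST}/prism/resources"

response = requests.get(url, auth=("admin", "password"), timeout=10)
response.raise_for_status()

# The page is HTML; look for the "Total objects" summary mentioned above.
for line in response.text.splitlines():
    if "Total objects" in line:
        print(line.strip())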
Product_Support
2018-09-07 01:58 AM

Labels:
  • TAC IA Series
1567 Views

BACnet/MSTP communication to Viconics VT thermostats is intermittent.

Issue: BACnet/MSTP communication to Viconics VT thermostats is intermittent.
Product Line: Field Devices
Environment: SE7000 Room Controller
Cause: Possible causes are the physical layer and device addressing.
Resolution: Verify that the physical layer (i.e. wire, end-of-line resistors, and bias resistors) is installed as recommended. A common wiring error is landing the trunk cable shield on the "REF" terminal at the thermostat. The shield should be carried through but not landed at the thermostat, and the trunk shield should be grounded in one place.
Duplicate device addresses can cause this type of issue as well. Verify that each controller has a unique "Com addr", which is part of its instance number. This can be viewed and modified from the display panel of the thermostat. To enter the configuration parameter menu:
  • VT7200 series: press and hold the OVERRIDE button for 8 seconds.
  • VT7300 series: press and hold the middle button (°C/°F or OVERRIDE) for 8 seconds.
  • VT7600 series: press and hold the MENU button for 8 seconds.
Product_Support
2018-09-06 02:44 PM
Last Updated: CraigEl, 2023-03-28 09:20 PM

Labels:
  • Field Devices
1592 Views

Error: NS #162 lcaErrNsMcpNotFound

Issue: A monitor point cannot be found.
Environment: Echelon LonMaker, NL220, Vista, LNS
Cause: This error may appear when trying to resynchronize.
Resolution: A monitor point is a member of a monitor set. Monitor points are created when you right-click on a SNVT in the LonMaker drawing and choose "Monitor." If a monitor point can no longer be found, it can be cleaned out of the monitor set to eliminate this error.
1. Navigate to LonMaker > Network Service Devices. Click on the NSD Name for your network, which enables the Monitor Sets button; click it.
2. Expand the view to find Echelon LonMaker Monitor Sets. These are the monitor points created manually by monitoring a point in the drawing. They are optional and unnecessary. Right-click on Echelon LonMaker Monitor Sets and select Delete.
3. Select "Done", then "Done" again.
4. Select LonMaker > Resynchronize and try it again. This time check "Sync drawing to database (fix-up drawing)" and also check "Sync monitor sets between drawing and database".
It should resynchronize successfully now. In the future, instead of right-clicking on a SNVT binding and selecting Monitor Value, use Get Value. This will momentarily show the LON value without creating monitor sets, which can later cause this error.
Product_Support
2018-09-06 02:22 PM

Labels:
  • TAC Vista
1596 Views

Controllers with network security applied are not responding to a ping command after firmware upgrade

Issue: NC2 and ACX57xx controllers with network security applied no longer respond to a ping command after upgrading to the newest firmware.
Product Line: Andover Continuum
Environment: Network Security, ACX5740, ACX5720, NC2, CX9680
Cause: The ping command was used to monitor controllers' offline/online status. CX9680 and ACX57xx controllers (with network security applied) stopped responding to a ping command after firmware versions v2.100048 and v1.100052, respectively, were applied.
Resolution: This is functioning as designed, due to the hardening of the controller's firmware for security reasons when network security has been applied.
Product_Support
2018-09-06 07:36 AM
Last Updated: GavinHe, 2023-05-16 08:34 AM

Labels:
  • Andover Continuum
1585 Views

Modbus IP X Driver supported data formats

Issue: When using the Modbus TCP/IP XDriver, will the data type used by a third-party system be supported?
Product Line: Andover Continuum
Environment: Modbus TCP XDriver
Cause: Not every possible data format is included in the Modbus IP XDriver.
Resolution: The following data formats are supported in version ModbusTCPIP_2ndGen200018.xdr:
  • Unsigned Integer, 16 bit
  • Signed Integer, 16 bit
  • Reverse Floating Point, 32 bit
  • Floating Point, 32 bit
  • Long Integer Unsigned, 32 bit
  • Long Integer Unsigned Swapped, 32 bit
If the products used have a different data format, a new development of the XDriver would be required; please see Will Continuum communicate with a third party system? for contact details.
Also ensure that the XDriver point type created (Numeric, Input, Output) is supported by the function code used. Check the XDriver manual for the list of point types supported by each function code. For example, function code 3 can be used with Numerics and Outputs, but not Inputs.
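As a rough illustration of what those formats mean on the wire, the sketch below shows common ways two 16-bit Modbus registers are combined into the 32-bit types listed above (Python, illustrative only; the exact byte/word order the XDriver uses for its "Reverse" and "Swapped" variants is an assumption, so confirm against the XDriver manual and the third-party device documentation).

import struct

# Illustrative sketch only: typical decodings of the register formats listed
# above. Word order for the "Reverse"/"Swapped" variants is an assumption.

def u16(reg):                  # Unsigned Integer 16 bit
    return reg & 0xFFFF

def s16(reg):                  # Signed Integer 16 bit
    return struct.unpack(">h", struct.pack(">H", reg & 0xFFFF))[0]

def float32(hi, lo):           # Floating Point 32 bit (high word first)
    return struct.unpack(">f", struct.pack(">HH", hi, lo))[0]

def float32_reversed(hi, lo):  # Reverse Floating Point 32 bit (word-swapped)
    return struct.unpack(">f", struct.pack(">HH", lo, hi))[0]

def u32(hi, lo):               # Long Integer Unsigned 32 bit
    return (hi << 16) | lo

def u32_swapped(hi, lo):       # Long Integer Unsigned Swapped 32 bit
    return (lo << 16) | hi

# Example: registers 0x41C8 and 0x0000 decode to 25.0 as a 32-bit float
print(float32(0x41C8, 0x0000))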
Product_Support
2018-09-06 12:25 PM
Last Updated: CraigEl, 2022-08-09 07:28 PM

Labels:
  • Andover Continuum
1592 Views