Your network consists of a single Active Directory domain. The network contains 20 file servers that run Windows Server 2008 R2. Each file server contains two volumes. One volume contains the operating system.
The other volume contains all data files.
You need to plan a recovery strategy that meets the following requirements:
-> Allows the operating system to be restored
-> Allows the data files to be restored
-> Ensures business continuity
-> Minimizes the amount of time to restore the server
What should you include in your plan?
Answer : D
Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration:
Windows Server Backup provides a reliable method of backing up and recovering the operating system, certain applications, and files and folders stored on your server. This feature replaces the previous backup feature that was available with earlier versions of Windows.
Windows Server Backup -
The Windows Server Backup tool is significantly different from ntbackup.exe, the tool included in Windows Server 2000 and Windows Server 2003. Administrators familiar with the previous tool should study the capabilities and limitations of the new Windows Server
Backup utility because many aspects of the tool's functionality have changed.
Exam Tip: What the tool does -
The Windows Server 2008 exams are likely to focus on the differences between
NTBACKUP and Windows Server Backup.
The key points to remember about backup in Windows Server 2008 are:
Windows Server Backup cannot write to tape drives.
You cannot write to network locations or optical media during a scheduled backup.
The smallest object that you can back up using Windows Server Backup is a volume.
Only local NTFS-formatted volumes can be backed up.
Windows Server Backup writes its output as VHD (Virtual Hard Disk) files. VHD files can be mounted with the appropriate software and read, either directly or through virtual machine software such as Hyper-V.
MORE INFO Recovering NTbackup backups
Windows Server Backup cannot recover backups written using ntbackup.exe. However, a special read-only version of ntbackup.exe that is compatible with Windows Server 2008 can be downloaded from http://go.microsoft.com/fwlink/?LinkId=82917.
Windows Server Backup is not installed by default on Windows Server 2008 and must be installed as a feature using the Add Features item under the Features node of the Server
Manager console. When installed, the Windows Server Backup node becomes available under the Storage node of the Server Manager Console. You can also open the Windows
Server Backup console from the Administrative Tools menu. The wbadmin.exe command-line utility, also installed during this process, is covered in The wbadmin
Command-Line Tool later in this lesson. To use Windows Server Backup or wbadmin to schedule backups, the computer requires an extra internal or external disk. External disks will need to be either USB 2.0 or IEEE 1394 compatible. When planning the deployment of disks to host scheduled backup data, you should ensure that the volume is capable of holding at least 2.5 times the amount of data that you want to back up. When planning deployment of disks for scheduled backup, you should monitor how well this size works and what sort of data retention it allows in a trial before deciding on a disk size for wider deployment throughout your organization.
When you configure your first scheduled backup, the disk that will host backup data will be hidden from Windows Explorer. If the disk currently hosts volumes and data, that existing data will be lost when the disk is repurposed as dedicated backup storage.
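As a rough, hedged illustration of how such a strategy can be exercised from the command line, the wbadmin.exe utility described above can run a one-time backup of both volumes to a dedicated backup disk and later restore the data volume. The drive letters, backup target, and version identifier below are placeholders, not values from the scenario.

rem Back up the operating system volume (C:) and the data volume (D:) to a dedicated backup disk (E:)
wbadmin start backup -backupTarget:E: -include:C:,D: -allCritical -quiet

rem List the backup versions stored on the target disk
wbadmin get versions -backupTarget:E:

rem Restore the data volume from a version identifier reported by "get versions"
wbadmin start recovery -version:01/01/2010-09:00 -itemType:Volume -items:D: -backupTarget:E: -quiet

Restoring the operating system volume itself is performed from the Windows Recovery Environment (for example, with wbadmin start sysrecovery) rather than from the running operating system.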
You are designing a monitoring solution to log performance for servers that run Windows
Server 2008 R2.
The monitoring solution must allow members of the Performance Log Users group to create and modify Data Collector Sets.
You need to grant members of the Performance Log Users group the necessary permissions.
Which User Rights Assignment policy should you configure?
To answer, select the appropriate User Rights Assignment policy in the answer area.
Answer :
Explanation:
Your network consists of a single Active Directory domain. Your network contains 10 servers and 500 client computers. All domain controllers run Windows Server 2008 R2.
A Windows Server 2008 R2 server has Remote Desktop Services installed. All client computers run Windows XP Service Pack 3.
You plan to deploy a new line-of-business Application. The Application requires desktop themes to be enabled.
You need to recommend a deployment strategy that meets the following requirements:
-> Only authorized users must be allowed to access the Application.
-> Authorized users must be able to access the Application from any client computer.
-> Your strategy must minimize changes to the client computers.
-> Your strategy must minimize software costs.
What should you recommend?
Answer : D
Explanation:
Desktop Experience -
Configuring a Windows Server 2008 server as a terminal server lets you use Remote Desktop Connection 6.0 to connect to the remote computer from your administrator workstation and reproduce on your computer the desktop that exists on the remote computer. When you install Desktop Experience on Windows Server 2008, you can use
Windows Vista features such as Windows Media Player, desktop themes, and photo management within the remote connection.
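If the recommended strategy relies on enabling desktop themes on the terminal server, the Desktop Experience feature can be added from the command line as well as from Server Manager; this is only a sketch, and on Windows Server 2008 R2 the Add-WindowsFeature PowerShell cmdlet is the equivalent.

rem Install the Desktop Experience feature on the terminal server (a restart may be required)
servermanagercmd -install Desktop-Experience -restart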
Your network consists of a single Active Directory domain. All domain controllers run
Windows Server 2008 R2. There are five servers that run Windows Server 2003 SP2. The
Windows Server 2003 SP2 servers have the Terminal Server component installed. A firewall server runs Microsoft Internet Security and Acceleration (ISA) Server 2006. All client computers run Windows 7.
You plan to give remote users access to the Remote Desktop Services servers.
You need to create a remote access strategy for the Remote Desktop Services servers that meets the following requirements:
-> Minimizes the number of open ports on the firewall server
-> Encrypts all remote connections to the Remote Desktop Services servers
-> Prevents network access to client computers that have Windows Firewall disabled
What should you do?
Answer : B
Explanation:
Terminal Services Gateway -
TS Gateway allows Internet clients secure, encrypted access to Terminal Servers behind your organization's firewall without having to deploy a Virtual Private Network (VPN) solution. This means that you can have users interacting with their corporate desktop or applications from the comfort of their homes without the problems that occur when VPNs are configured to run over multiple Network Address Translation (NAT) gateways and the firewalls of multiple vendors.
TS Gateway works using RDP over Hypertext Transfer Protocol Secure (HTTPS), which is the same protocol used by Microsoft Office Outlook 2007 to access corporate Exchange
Server 2007 Client Access Servers over the Internet. TS Gateway Servers can be configured with connection authorization policies and resource authorization policies as a way of differentiating access to Terminal Servers and network resources.
Connection authorization policies allow access based on a set of conditions specified by the administrator; resource authorization policies grant access to specific Terminal Server resources based on user account properties.
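As a sketch of how the TS Gateway role service described above is put in place (the role-service ID is the Windows Server 2008 name and should be confirmed with servermanagercmd -query):

rem Install the Terminal Services Gateway role service
servermanagercmd -install TS-Gateway

Because all client traffic then reaches the gateway as RDP encapsulated in HTTPS, only TCP port 443 needs to be published on the ISA Server firewall, which satisfies the requirement to minimize open ports.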
Network Access Protection -
You deploy Network Access Protection on your network as a method of ensuring that computers accessing important resources meet certain client health benchmarks. These benchmarks include (but are not limited to) having the most recent updates applied, having antivirus and anti-spyware software up to date, and having important security technologies such as Windows Firewall configured and functional. In this lesson, you will learn how to plan and deploy an appropriate network access protection infrastructure and enforcement method for your organization.
Your network consists of a single Active Directory domain. The network contains two
Windows Server 2008 R2 computers named Server1 and Server2. The company has two identical print devices. You plan to deploy print services.
You need to plan a print services infrastructure to meet the following requirements:
-> Manage the print queue from a central location.
-> Make the print services available, even if one of the print devices fails.
What should you include in your plan?
Answer : A
Explanation:
http://www.techrepublic.com/blog/datacenter/configure-printer-pooling-in-windows-server-2008/964
Managing printers can be the bane of a Windows administrator. One feature that may assist you with this task is the Windows printer pooling feature. Windows Server 2008 offers functionality that permits a collection of multiple like-configured printers to distribute the print workload.
Printer pooling makes one share that clients print to, and the jobs are sent to the first available printer. Configuring print pooling is rather straightforward in the Windows printer configuration applet of the Control Panel. Figure A shows two like-modeled printers being pooled.
To use pooling, the printer models need to be the same so that the driver configuration is transparent to the end device; this can also help control costs of toner and other supplies.
But plan accordingly: you don't want users essentially running track to look for their print jobs on every printer in the office.
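A pooled printer can also be configured from the command line by assigning several ports to one logical printer. The printer and port names below are hypothetical, and the assumption that a comma-separated PortName value enables pooling through printui should be verified; the Control Panel route described above is the documented method.

rem Assign two standard TCP/IP ports to a single shared printer so jobs go to the first available device
rundll32 printui.dll,PrintUIEntry /Xs /n "SalesPrinter" PortName "IP_10.0.0.21,IP_10.0.0.22"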
Your company has several branch offices.
Your network consists of a single Active Directory domain. Each branch office contains domain controllers and member servers. The domain controllers run Windows Server 2003
SP2. The member servers run Windows Server 2008 R2.
Physical security of the servers at the branch offices is a concern.
You plan to implement Windows BitLocker Drive Encryption (BitLocker) on the member servers.
You need to ensure that you can access the BitLocker volume if the BitLocker keys are corrupted on the member servers. The recovery information must be stored in a central location.
What should you do?
Answer : B
Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration:
Planning BitLocker Deployment -
Windows BitLocker Drive Encryption (BitLocker) is a feature that debuted in Windows
Vista Enterprise and Ultimate Editions and is available in all versions of Windows Server
2008. BitLocker serves two purposes:
protecting server data through full volume encryption and providing an integrity-checking mechanism to ensure that the boot environment has not been tampered with.
Encrypting the entire operating system and data volumes means that not only are the operating system and data protected, but so are paging files, applications, and application configuration data. In the event that a server is stolen or a hard disk drive removed from a server by third parties for their own nefarious purposes, BitLocker ensures that these third parties cannot recover any useful data. The drawback is that if the BitLocker keys for a server are lost and the boot environment is compromised, the data stored on that server will be unrecoverable.
To support integrity checking, BitLocker requires a computer to have a chip capable of supporting the Trusted Platform Module (TPM) 1.2 or later standard. A computer must also have a BIOS that supports the TPM standard. When BitLocker is implemented in these conditions and in the event that the condition of a startup component has changed,
BitLocker-protected volumes are locked and cannot be unlocked unless the person doing the unlocking has the correct digital keys. Protected startup components include the BIOS,
Master Boot Record, Boot Sector, Boot Manager, and Windows Loader.
From a systems administration perspective, it is important to disable BitLocker during maintenance periods when any of these components are being altered. For example, you must disable BitLocker during a BIOS upgrade. If you do not, the next time the computer starts, BitLocker will lock the volumes and you will need to initiate the recovery process.
The recovery process involves entering a 48-character password that is generated and saved to a specified location when running the BitLocker setup wizard. This password should be stored securely because without it the recovery process cannot occur. You can also configure BitLocker to save recovery data directly to Active Directory; this is the recommended management method in enterprise environments.
You can also implement BitLocker without a TPM chip. When implemented in this manner, there is no startup integrity check. A key is stored on a removable USB memory device, which must be present and supported by the computer's BIOS each time the computer starts up. After the computer has successfully started, the removable USB memory device can be removed and should then be stored in a secure location. Configuring a computer running Windows Server 2008 to use a removable USB memory device as a BitLocker startup key is covered in the second practice at the end of this lesson.
BitLocker Group Policy Settings -
The relevant settings are located under Computer Configuration\Administrative Templates\Windows Components\BitLocker Drive Encryption. Enabling the Store BitLocker recovery information in Active Directory Domain Services policy for the member servers ensures that recovery passwords are escrowed centrally in AD DS, so the volumes can still be unlocked if the local BitLocker keys are corrupted.
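For day-to-day administration of the scheme described above, the manage-bde.exe tool included with Windows Server 2008 R2 can perform the maintenance and recovery-escrow tasks mentioned. The drive letter and the protector GUID below are placeholders.

rem Temporarily disable BitLocker protection before a BIOS upgrade, then re-enable it afterwards
manage-bde -protectors -disable C:
manage-bde -protectors -enable C:

rem List the protectors on the volume to find the ID of the numerical (recovery) password protector
manage-bde -protectors -get C:

rem Back up that recovery password to Active Directory Domain Services
manage-bde -protectors -adbackup C: -id {DFB478E6-8B3F-4DCA-9576-C1905B49C71E}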
Your network consists of a single Active Directory domain. The network includes a branch office named Branch1. Branch1 contains 50 member servers that run Windows Server
2008 R2. An organizational unit (OU) named Branch1Servers contains the computer objects for the servers in Branch1. A global group named Branch1admins contains the user accounts for the administrators. Administrators maintain all member servers in Branch1.
You need to recommend a solution that allows the members of the Branch1admins group to perform the following tasks on the Branch1 member servers:
-> Stop and start services
-> Change registry settings
What should you recommend?
Answer : B
Explanation:
Local admins have these rights; Power Users do not:
By default, members of the power users group have no more user rights or permissions than a standard user account. The Power Users group in previous versions of Windows was designed to give users specific administrator rights and permissions to perform common system tasks. In this version of Windows, standard user accounts inherently have the ability to perform most common configuration tasks, such as changing time zones. For legacy applications that require the same Power User rights and permissions that were present in previous versions of Windows, administrators can apply a security template that enables the Power Users group to assume the same rights and permissions that were present in previous versions of Windows.
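If the recommendation is to give Branch1admins local Administrators membership on the Branch1 servers, the usual approach is a Restricted Groups setting in a GPO linked to the Branch1Servers OU. A per-server equivalent, shown only as a sketch with an assumed domain name, is:

rem Add the Branch1admins global group to the local Administrators group (CONTOSO is a placeholder domain name)
net localgroup Administrators CONTOSO\Branch1admins /add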
Your network contains a Windows Server 2008 R2 server that functions as a file server. All users have laptop computers that run Windows 7.
The network is not connected to the Internet.
Users save files to a shared folder on the server.
You need to design a data provisioning solution that meets the following requirements:
-> Users who are not connected to the corporate network must be able to access the files and the folders in the corporate network.
-> Unauthorized users must not have access to the cached files and folders.
What should you do?
Answer : D
Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration:
Lesson 2: Provisioning Data -
Lesson 1 in this chapter introduced the Share And Storage Management tool, which gives you access to the Provision Storage Wizard and the Provision A Shared Folder Wizard.
These tools allow you to configure storage on the volumes accessed by your server and to set up shares. When you add the Distributed File System (DFS) role service to the File
Services server role you can create a DFS Namespace and go on to configure DFSR.
Provisioning data ensures that user files are available and remain available even if a server fails or a WAN link goes down. Provisioning data also ensures that users can work on important files when they are not connected to the corporate network.
In a well-designed data provisioning scheme, users should not need to know the network path to their files, or from which server they are downloading them. Even large files should typically download quickly; files should not be downloaded or saved across a WAN link when they are available from a local server. You need to configure indexing so that users can find information quickly and easily. Offline files need to be synchronized quickly and efficiently, and whenever possible without user intervention. A user should always be working with the most up-to-date information (except when a shadow copy is specified), and fast and efficient replication should ensure that where several copies of a file exist on a network, they contain the same information and latency is minimized.
You have several tools that you use to configure shares and offline files, configure storage, audit file access, prevent inappropriate access, prevent users from using excessive disk resources, and implement disaster recovery. However, the main tool for provisioning storage and implementing a shared folder structure is DFS Management, specifically DFS
Namespaces. The main tool for implementing shared folder replication in a
Windows Server 2008 network is DFS Replication.
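As a sketch of how the DFS tooling discussed above is made available on a Windows Server 2008 R2 file server, the role services can be added from the command line. The IDs shown are assumptions from the File Services role and should be confirmed with servermanagercmd -query.

rem Add the DFS Namespaces and DFS Replication role services of the File Services role
servermanagercmd -install FS-DFS-Namespace
servermanagercmd -install FS-DFS-Replication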
A company runs a third-party DHCP Application on a Windows Server 2008 R2 server. The
Application runs as a service that launches a background process upon startup.
The company plans to migrate the DHCP Application to a Windows Server 2008 R2 failover cluster.
You need to provide high availability for the DHCP Application.
Which service or Application should you configure?
To answer, select the appropriate service or Application in the answer area.
Answer :
Explanation:
Windows Server 2008 (and R2) Failover Clustering supports virtually every workload that comes with Windows Server; however, there are many custom and third-party applications that take advantage of this infrastructure to provide high availability. Additionally, there are some applications that were not originally designed to run in a failover cluster. These can be created, managed by, and integrated with Failover Clustering using a generic container, with such applications using the Generic Application resource type.
We use the Generic Application resource type to enable such applications to run in a highly available environment that can benefit from clustering features (i.e., high availability, failover, etc.).
When a generic application resource is online, it means that the application is running.
When a generic application is offline, it means that the application is not running.
http://blogs.msdn.com/b/clustering/archive/2009/04/10/9542115.aspx
A cluster-unaware application is distinguished by the following features.
The application does not use the Failover Cluster API. Therefore, it cannot discover information about the cluster environment, interact with cluster objects, detect that it is running in a cluster, or change its behavior between clustered and non-clustered systems.
If the application is managed as a cluster resource, it is managed as a Generic Application resource type or Generic Service resource type. These resource types provide very basic routines for failure detection and application shutdown. Therefore, a cluster-unaware application might not be able to perform the initialization and cleanup tasks needed for it to be consistently available in the cluster.
Most older applications are cluster-unaware. However, a cluster-unaware application can be made clusteraware by creating resource types to manage the application. A custom resource type provides the initialization, cleanup, and management routines specific to the needs of the application.
There is nothing inherently wrong with cluster-unaware applications. As long as they are functioning and highly available as cluster resources when managed as Generic Applications or Generic Services, there is no need to make them cluster-aware. However, if an application does not start, stop, or fail over consistently when managed by the generic types, it should be made cluster-aware.
Your network consists of a single Active Directory forest. The forest contains one Active
Directory domain. The domain contains eight domain controllers. The domain controllers run Windows Server 2003 Service Pack 2.
You upgrade one of the domain controllers to Windows Server 2008 R2.
You need to recommend an Active Directory recovery strategy that supports the recovery of deleted objects.
The solution must allow deleted objects to be recovered for up to one year after the date of deletion.
What should you recommend?
Answer : A
Explanation:
The tombstone lifetime must be substantially longer than the expected replication latency between the domain controllers. The interval between cycles of deleting tombstones must be at least as long as the maximum replication propagation delay across the forest.
Because the expiration of a tombstone lifetime is based on the time when an object was deleted logically, rather than on the time when a particular server received that tombstone through replication, an object's tombstone is collected as garbage on all servers at approximately the same time. If the tombstone has not yet replicated to a particular domain controller, that DC never records the deletion. This is the reason why you cannot restore a domain controller from a backup that is older than the tombstone lifetime.
By default, the Active Directory tombstone lifetime is 60 days in forests created on Windows 2000 Server or the original release of Windows Server 2003, and 180 days in forests created with Windows Server 2003 SP1 or later. This value can be changed if necessary. To change this value, the tombstoneLifetime attribute of the CN=Directory Service object in the configuration partition must be modified.
This is related to Windows Server 2003 but should still be relevant: http://www.petri.co.il/changing_the_tombstone_lifetime_windows_ad.htm
Authoritative Restore -
When a nonauthoritative restore is performed, objects deleted after the backup was taken will again be deleted when the restored DC replicates with other servers in the domain. On every other DC the object is marked as deleted so that when replication occurs the local copy of the object will also be marked as deleted. The authoritative restore process marks the deleted object in such a way that when replication occurs, the object is restored to active status across the domain. It is important to remember that when an object is deleted it is not instantly removed from Active Directory, but gains an attribute that marks it as deleted until the tombstone lifetime is reached and the object is removed. The tombstone lifetime is the amount of time a deleted object remains in Active Directory and has a default value of 180 days.
To ensure that the Active Directory database is not updated before the authoritative restore takes place, you use the Directory Services Restore Mode (DSRM) when performing the authoritative restore process. DSRM allows the administrator to perform the necessary restorations and mark the objects as restored before rebooting the DC and allowing those changes to replicate out to other DCs in the domain.
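For reference, an authoritative restore is marked from Directory Services Restore Mode with ntdsutil after the nonauthoritative restore of the directory database has completed. The distinguished name below is a placeholder; the lines after the first are entered at the ntdsutil prompts.

rem Run inside Directory Services Restore Mode after the nonauthoritative restore
ntdsutil
activate instance ntds
authoritative restore
restore subtree "OU=Sales,DC=contoso,DC=com"
quit
quit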
Your company has two branch offices that connect by using a WAN link. Each office contains a server that runs Windows Server 2008 R2 and that functions as a file server.
Users in each office store data on the local file server. Users have access to data from the other office.
You need to plan a data access solution that meets the following requirements:
-> Folders that are stored on the file servers must be available to users in both offices.
-> Network bandwidth usage between offices must be minimized.
-> Users must be able to access all files in the event that a WAN link fails.
What should you include in your plan?
Answer : A
Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration:
DFS Replication provides a multimaster replication engine that lets you synchronize folders on multiple servers across local or WAN connections. It uses the Remote Differential Compression (RDC) protocol to replicate only the portions of files that have changed since the last replication. You can use DFS Replication in conjunction with DFS Namespaces or by itself.
File Replication Service (FRS) -
The File Replication Service (FRS) enables you to synchronize folders with file servers that use FRS. Where possible you should use the DFS Replication (DFSR) service. You should install FRS only if your Windows Server 2008 server needs to synchronize folders with servers that use FRS with the Windows Server 2003 or Windows 2000 Server implementations of DFS.
The main tool for implementing shared folder replication in a Windows Server 2008 network is DFS Replication.
Using DFS Namespace to Plan and Implement a Shared Folder Structure and Enhance
Data Availability -
When you add the DFS Management role service to the Windows Server 2008 File
Services Server role, the DFS Management console is available from the Administrative
Tools menu or from within Server Manager. This console provides the DFS Namespaces and DFS Replication tools, as shown in Figure 6-31. DFS Namespaces lets you group shared folders that are located on different servers into one or more logically structured namespaces. Each namespace appears to users as a single shared folder with a series of subfolders.
This structure increases availability. You can use the efficient, multiple-master replication engine provided by DFSR to replicate a DFS Namespace within a site and across WAN links. A user connecting to files within the shared folder structures contained in the DFS
Namespace will automatically connect to shared folders in the same AD DS site (when available) rather than across a WAN. You can have several DFS Namespace servers in a site and spread over several sites, so if one server goes down, a user can still access files within the shared folder structure.
Because DFSR is multimaster, a change to a file in the DFS Namespace on any DFS
Namespace server is quickly and efficiently replicated to all other DFS Namespace servers that hold that namespace. Note that DFSR replaces the File Replication Service (FRS) as the replication engine for DFS Namespaces, as well as for replicating the AD DS SYSVOL folder in domains that use the Windows Server 2008 domain functional level. You can install FRS Replication as part of the Windows Server 2003 File Services role service, but you should use it only if you need to synchronize with servers that use FRS with the
Windows Server 2003 or Windows 2000 Server implementations of DFS.
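A replication group for the two branch-office file servers could be created with the dfsradmin.exe tool that is installed with DFS Replication. The group, server, and folder names below are placeholders, and the exact parameter spelling should be checked with dfsradmin <command> /? before use.

rem Create a replication group, add both file servers, and define the replicated folder
dfsradmin rg new /rgname:BranchData
dfsradmin mem new /rgname:BranchData /memname:FS-OFFICE1
dfsradmin mem new /rgname:BranchData /memname:FS-OFFICE2
dfsradmin rf new /rgname:BranchData /rfname:SharedFolders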
A company has its main office in New York and branch offices in Miami and Quebec. All sites are connected by reliable WAN links.
You are designing a Windows Server Update Services (WSUS) deployment strategy. The deployment strategy must meet the following requirements:
-> Download updates from Windows Update only in the New York office.
-> Ensure that the update language can be specified for the Quebec office.
You need to design a deployment strategy that meets the requirements.
How should you configure the servers and hierarchy types?
To answer, drag the appropriate server types and hierarchy types from the list to the correct location or locations in the answer area.
Answer :
Explanation:
Your network consists of a single Active Directory site that includes two network segments.
The network segments connect by using a router that is RFC 1542 compliant.
You plan to use Windows Deployment Services (WDS) to deploy Windows Server 2008 R2 servers. All new servers support PreBoot Execution Environment (PXE).
You need to design a deployment strategy to meet the following requirements:
-> Support Windows Server 2008 R2
-> Deploy the servers by using WDS in both network segments
-> Minimize the number of servers used to support WDS
What should you include in your design?
Answer : A
Explanation:
http://support.microsoft.com/kb/926172
IP Helper table updates -
The PXE network boot method uses DHCP packets for communication. The DHCP packets serve a dual purpose. They are intended to help the client in obtaining an IP address lease from a DHCP server and to locate a valid network boot server. If the booting client, the
DHCP server, and the network boot server are all located on the same network segment, usually no additional configuration is necessary. The DHCP broadcasts from the client reach both the DHCP server and the network boot server.
However, if either the DHCP server or the network boot server are on a different network segment than the client, or if they are on the same network segment but the network is controlled by a switch or a router, you may have to update the routing tables for the networking equipment in order to make sure that DHCP traffic is directed correctly.
Such a process is known as performing IP Helper table updates. When you perform this process, you must configure the networking equipment so that all DHCP broadcasts from the client computer are directed to both a valid DHCP server and to a valid network boot server.
Note: It is inefficient to rebroadcast the DHCP packets onto other network segments. It is best to only forward the DHCP packets to the recipients that are listed in the IP Helper table.
After the client computer has obtained an IP address, it contacts the network boot server directly in order to obtain the name and the path of the network boot file to download.
Again, this process is handled by using DHCP packets.
Note: We recommend that you update the IP Helper tables in order to resolve scenarios in which the client computers and the network boot server are not located on the same network segment.
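On the WDS side of such a design, a single deployment server can be initialized and set to answer PXE clients from both segments once the IP Helper entries are in place. The RemoteInstall path below is an assumption.

rem Initialize Windows Deployment Services and allow it to answer all PXE clients
wdsutil /Initialize-Server /RemInst:"D:\RemoteInstall"
wdsutil /Set-Server /AnswerClients:All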
Your network contains a standalone root certification authority (CA). You have a server named Server1 that runs Windows Server 2008 R2. You issue a server certificate to
Server1. You deploy Secure Socket Tunneling Protocol (SSTP) on Server1.
You need to recommend a solution that allows external partner computers to access internal network resources by using SSTP.
What should you recommend?
Answer : B
Explanation:
Lesson 1: Configuring Active Directory Certificate Services
Certificate Authorities are becoming as integral to an organization's network infrastructure as domain controllers, DNS, and DHCP servers. You should spend at least as much time planning the deployment of Certificate Services in your organization's Active Directory environment as you spend planning the deployment of these other infrastructure servers. In this lesson, you will learn how certificate templates impact the issuance of digital certificates, how to configure certificates to be automatically assigned to users, and how to configure supporting technologies such as Online Responders and credential roaming.
Learning how to use these technologies will smooth the integration of certificates into your organization's Windows Server 2008 environment.
After this lesson, you will be able to:
Install and manage Active Directory Certificate Services.
Configure autoenrollment for certificates.
Configure credential roaming.
Configure an Online Responder for Certificate Services.
Types of Certificate Authority -
When planning the deployment of Certificate Services in your network environment, you must decide which type of Certificate Authority best meets your organizational requirements. There are four types of Certificate Authority (CA):
Enterprise Root -
Enterprise Subordinate -
Standalone Root -
Standalone Subordinate -
The type of CA you deploy depends on how certificates will be used in your environment and the state of the existing environment. You have to choose between an Enterprise and a Standalone CA during the installation of the Certificate Services role, as shown in Figure
10-1. You cannot switch between any of the CA types after the
CA has been deployed.
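As a quick way to confirm which of the four CA types an existing CA is, certutil can report the CA configuration. This is shown only as a verification aid and assumes it is run on, or pointed at, the CA in question.

rem Display information about the CA, including whether it is an enterprise or standalone, root or subordinate CA
certutil -cainfo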
Your network consists of a single Active Directory domain. The functional level of the domain is Windows Server 2008 R2. The domain contains 200 Windows Server 2008 R2 servers.
You need to plan a monitoring solution that meets the following requirements:
-> Sends a notification by email to the administrator if an Application error occurs on any of the servers
-> Uses the minimum amount of administrative effort
What should you include in your plan?
Answer : A
Explanation:
http://technet.microsoft.com/en-us/library/cc749183.aspx
http://technet.microsoft.com/en-us/library/cc748890.aspx
http://technet.microsoft.com/en-us/library/cc722010.aspx
Event Subscriptions -
Applies To: Windows 7, Windows Server 2008 R2, Windows Vista
Event Viewer enables you to view events on a single remote computer. However, troubleshooting an issue might require you to examine a set of events stored in multiple logs on multiple computers.
Windows Vista includes the ability to collect copies of events from multiple remote computers and store them locally. To specify which events to collect, you create an event subscription. Among other details, the subscription specifies exactly which events will be collected and in which log they will be stored locally. Once a subscription is active and events are being collected, you can view and manipulate these forwarded events as you would any other locally stored events.
Using the event collecting feature requires that you configure both the forwarding and the collecting computers. The functionality depends on the Windows Remote Management
(WinRM) service and the Windows Event Collector (Wecsvc) service. Both of these services must be running on computers participating in the forwarding and collecting process. To learn about the steps required to configure event collecting and forwarding computers, see Configure Computers to Forward and Collect Events.
Additional Considerations -
You can subscribe to receive events from an existing subscription on a remote computer.
Configure Computers to Forward and Collect Events
Applies To: Windows 7, Windows Server 2008 R2, Windows Vista
Before you can create a subscription to collect events on a computer, you must configure both the collecting computer (collector) and each computer from which events will be collected (source). Updated information about event subscriptions may be available online at Event Subscriptions.
To configure computers in a domain to forward and collect events
1. Log on to all collector and source computers. It is a best practice to use a domain account with administrative privileges.
2. On each source computer, type the following at an elevated command prompt: winrm quickconfig
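The referenced TechNet procedure continues on the collecting computer; a minimal sketch, assuming the default Forwarded Events log is acceptable, is:

rem On the collecting computer: configure the Windows Event Collector service to receive forwarded events
wecutil qc

The subscription itself is then created in Event Viewer (Subscriptions, Create Subscription) or with wecutil cs against a subscription XML file, and the article also has you add the collector's computer account to the local Administrators group on each source computer so that it can read the event logs.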