AutoIt Consulting - Windows Deployment and Scripting Specialists
https://www.autoitconsulting.com/site

SCCM Package and Program Return Codes and Reboot Behaviour
https://www.autoitconsulting.com/site/deployment/sccm-package-program-return-codes-reboot-behaviour/
Thu, 30 Aug 2018 11:08:06 +0000

The post SCCM Package and Program Return Codes and Reboot Behaviour appeared first on AutoIt Consulting.

Overview

This post describes the ways that SCCM handles reboots for a Package & Program deployment. The behaviour depends on the Program configuration, deployment method, and return code. For example, a Package will perform differently when deployed in a Task Sequence than when it is deployed directly to a collection.

This post will explain the different Program configuration options and deployment types and detail the resulting reboot behaviour for all the various combinations. The results are not always what would be expected. In particular, deployment of a Program within a Task Sequence can be problematic and it is usually more predictable to use the Run Command Line step in this case.

Program Configuration

After creating a Program you can configure an action to perform after running.

(Screenshot: Program properties showing the “After running” options)

The three options are:

  • No action required
  • Program Controls Restart
  • Configuration Manager restarts computer

This configuration interacts with the deployment type and program return code to change how SCCM handles a reboot scenario.

Deployment Types

Packages and Programs can either be deployed directly to a collection or through the Install Package step in a Task Sequence. They can also be deployed in a Task Sequence indirectly using the Run Command Line step.

SCCM Return Codes

The meaning of return codes for Package and Program deployments is defined in the SCCM site control file. Codes are assigned into three categories:

  • Success
  • Success and Reboot Required
  • Failure and Retry Required

In this post we will be using the codes 0 (success) and 3010 (reboot required) as examples. For a full list of other possible codes see the post “SCCM Package Success, Reboot and Retry Return Codes”.

Full OS – Package & Program

This scenario is when a Program is deployed to a machine running the full operating system and SCCM client.

After running: No action required

  • Return code 0: No automatic reboot
  • Return code 3010: No automatic reboot

After running: Program Controls Restart

  • Return code 0: Waits 60 seconds for the program to reboot itself. If no reboot occurs, continues installing other Packages
  • Return code 3010: Waits 60 seconds for the program to reboot itself. If no reboot occurs, continues installing other Packages

After running: Configuration Manager restarts computer

  • Return code 0: Prompts for a restart and gives a 90-minute countdown
  • Return code 3010: Prompts for a restart and gives a 90-minute countdown

Full OS – Task Sequence – Install Package

This scenario is when a Task Sequence is deployed to a machine running the full operating system and SCCM client. The Program is deployed within the Task Sequence using the Install Package step.

After running: No action required

  • Return code 0: No automatic reboot. Task Sequence continues
  • Return code 3010: Automatic reboot after 30 seconds unless the program reboots itself. Task Sequence resumes after the reboot

After running: Program Controls Restart

  • Return code 0: Automatic reboot after 30 seconds unless the program reboots itself. Task Sequence resumes after the reboot
  • Return code 3010: Automatic reboot after 30 seconds unless the program reboots itself. Task Sequence resumes after the reboot

After running: Configuration Manager restarts computer

  • Return code 0: Prompts for a restart and gives a 90-minute countdown
  • Return code 3010: Prompts for a restart and gives a 90-minute countdown

OSD – Task Sequence – Install Package

This scenario is when a Task Sequence is deployed to a machine running a bare metal Operating System Deployment. The Program is deployed within the Task Sequence using the Install Package step.

After running: No action required

  • Return code 0: No automatic reboot. Task Sequence continues
  • Return code 3010: Automatic reboot after 30 seconds unless the program reboots itself. Task Sequence resumes after the reboot

After running: Program Controls Restart

  • Return code 0: Automatic reboot after 30 seconds unless the program reboots itself. Task Sequence resumes after the reboot
  • Return code 3010: Automatic reboot after 30 seconds unless the program reboots itself. Task Sequence resumes after the reboot

After running: Configuration Manager restarts computer

  • Return code 0: Automatic reboot after 30 seconds unless the program reboots itself. Task Sequence resumes after the reboot
  • Return code 3010: Automatic reboot after 30 seconds unless the program reboots itself. Task Sequence resumes after the reboot

Task Sequence – Run Command Line

This scenario is when a Task Sequence is deployed to a machine running either a bare-metal Operating System Deployment or the full OS. The Program is run within the Task Sequence using the Run Command Line step rather than the Program defined on the Package.

  • Return code 0: No automatic reboot. Task Sequence continues
  • Return code 3010: No automatic reboot. Task Sequence continues
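For reference, the combinations above can be condensed into a single lookup. The sketch below is purely illustrative (Python; the scenario and setting keys are my own naming, and SCCM exposes no such API):

```python
# Observed reboot behaviour per (scenario, "after running" setting, return code),
# condensed from the tables above. Illustrative summary only - not an SCCM API.

NO_REBOOT = "No automatic reboot; continue"
WAIT_60 = "Wait 60s for program to reboot itself, then continue"
PROMPT_90 = "Prompt for restart with 90 minute countdown"
AUTO_30 = "Automatic reboot after 30s unless program reboots itself; TS resumes"

BEHAVIOUR = {
    # Full OS - Package & Program deployed directly to a collection
    ("full-os-direct", "no-action", 0): NO_REBOOT,
    ("full-os-direct", "no-action", 3010): NO_REBOOT,
    ("full-os-direct", "program-controls", 0): WAIT_60,
    ("full-os-direct", "program-controls", 3010): WAIT_60,
    ("full-os-direct", "cm-restarts", 0): PROMPT_90,
    ("full-os-direct", "cm-restarts", 3010): PROMPT_90,
    # Full OS - Task Sequence - Install Package step
    ("full-os-ts", "no-action", 0): NO_REBOOT,
    ("full-os-ts", "no-action", 3010): AUTO_30,
    ("full-os-ts", "program-controls", 0): AUTO_30,
    ("full-os-ts", "program-controls", 3010): AUTO_30,
    ("full-os-ts", "cm-restarts", 0): PROMPT_90,
    ("full-os-ts", "cm-restarts", 3010): PROMPT_90,
    # OSD - Task Sequence - Install Package step
    ("osd-ts", "no-action", 0): NO_REBOOT,
    ("osd-ts", "no-action", 3010): AUTO_30,
    ("osd-ts", "program-controls", 0): AUTO_30,
    ("osd-ts", "program-controls", 3010): AUTO_30,
    ("osd-ts", "cm-restarts", 0): AUTO_30,
    ("osd-ts", "cm-restarts", 3010): AUTO_30,
    # Any Task Sequence - Run Command Line step (no Program configuration)
    ("ts-run-command-line", None, 0): NO_REBOOT,
    ("ts-run-command-line", None, 3010): NO_REBOOT,
}

print(BEHAVIOUR[("osd-ts", "cm-restarts", 3010)])
```

The lookup makes the key difference visible at a glance: only the direct deployment and the Run Command Line step leave a 3010 return code alone.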

 

SCCM Package Success, Reboot and Retry Return Codes
https://www.autoitconsulting.com/site/deployment/sccm-package-success-reboot-retry-return-codes/
Thu, 30 Aug 2018 10:46:57 +0000

The post SCCM Package Success, Reboot and Retry Return Codes appeared first on AutoIt Consulting.

Overview

This post describes the return codes that SCCM understands for Package and Program deployments and shows which codes are classed success, reboot, or retry.

The meaning of return codes for Packages and Programs is defined in the SCCM site control file. Codes are assigned into three categories:

  • Success Return Codes – The installation was successful.
  • Reboot Return Codes – The installation was successful and a reboot is required. Note: 1604 and 1641 are actually errors but SCCM classes all reboot codes as a “success”.
  • Execution Failure Retry Error Codes – The installation failed but can be retried. For example, a Program deployment that returns error code 5 (access denied) will be automatically retried after 15 minutes.

The full list of codes defined by SCCM for Package and Program deployments is:

Site control file return code categories:

  • Success Return Codes: 0
  • Reboot Return Codes: 1604, 1641, 3010, 3011
  • Execution Failure Retry Error Codes: 4, 5, 8, 13, 14, 39, 51, 53, 54, 55, 59, 64, 65, 67, 70, 71, 85, 86, 87, 112, 128, 170, 267, 999, 1003, 1203, 1219, 1220, 1222, 1231, 1232, 1238, 1265, 1311, 1323, 1326, 1330, 1618, 1622, 2250
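When scripting around deployments it can be handy to express the categories as a lookup. This is an illustrative Python sketch of the classification (the category labels are my own; SCCM performs this mapping internally from the site control file):

```python
# Illustrative classifier for Package & Program return codes, built from the
# site control file categories listed above. Label strings are made up here.

REBOOT_CODES = {1604, 1641, 3010, 3011}
RETRY_CODES = {
    4, 5, 8, 13, 14, 39, 51, 53, 54, 55, 59, 64, 65, 67, 70, 71, 85, 86, 87,
    112, 128, 170, 267, 999, 1003, 1203, 1219, 1220, 1222, 1231, 1232, 1238,
    1265, 1311, 1323, 1326, 1330, 1618, 1622, 2250,
}

def classify(return_code: int) -> str:
    """Map an installer exit code to its SCCM site control category."""
    if return_code == 0:
        return "success"
    if return_code in REBOOT_CODES:
        return "success-reboot-required"
    if return_code in RETRY_CODES:
        return "failure-retry"
    return "failure"

print(classify(3010))  # success-reboot-required
print(classify(5))     # failure-retry
```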

 

SCCM Create Task Sequence Media Wizard and Network Ports
https://www.autoitconsulting.com/site/deployment/sccm-create-task-sequence-media-wizard-network-ports/
Tue, 28 Aug 2018 11:15:33 +0000

The post SCCM Create Task Sequence Media Wizard and Network Ports appeared first on AutoIt Consulting.

Overview

This post describes the firewall ports that need to be opened between an SCCM console and SCCM servers in order to use the Create Task Sequence Media wizard. I recently had to run the Create Task Sequence Media wizard from an SCCM console installed in a secure environment. All the network ports as documented by Microsoft were open but the wizard didn’t work. This post describes the requirements of the process in more detail and shows the network ports that were missing from the Microsoft documentation.

Create Media Wizard Requirements

The official list of ports used by the SCCM console is documented at https://docs.microsoft.com/en-us/sccm/core/plan-design/hierarchy/ports. However, this list only covers the basic SCCM console operations and does not cover the Create Task Sequence Media process.

During the media creation process the console allows the choice of various Distribution Points (DPs) from which to download the content required for the specified task sequence. An SMB connection is made to the DP’s ContentLib$ network share. All content is downloaded from the DP via SMB, not HTTP/HTTPS as might be expected. All other console requirements (RPC) are as documented by Microsoft.

Create Media Wizard Firewall Port List

The table below shows the full list of firewall ports that need to be opened between the SCCM console and the various SCCM server systems in order to run the Create Task Sequence Media wizard. The RPC connections to the SMS provider system would be to the primary Site Server or CAS.

Description                                                        UDP       TCP
RPC (initial connection to WMI to locate provider system)                    135
RPC Endpoint Mapper                                                135       135
RPC Dynamic Ports (Windows Vista, Windows Server 2008, or later)             49152-65535
RPC Dynamic Ports (Windows XP, Windows Server 2003)                          1025-5000
SMB to SCCM Distribution Point                                     137, 138  137, 139, 445
(for each DP where content needs to be accessed by the console)
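When working from a locked-down console it can help to probe the fixed TCP ports first. The sketch below is illustrative only (Python; `dp01.example.local` is a placeholder hostname). It does not test UDP ports or the RPC dynamic range:

```python
# Quick TCP reachability check for the fixed console-to-server ports listed
# above. Hostname is a placeholder; UDP and dynamic RPC ports are not probed.
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the SMB ports against a Distribution Point (placeholder name)
for port in (139, 445):
    print(port, tcp_port_open("dp01.example.local", port))
```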

 

Single Instance WinForm App in C# with Mutex and Named Pipes
https://www.autoitconsulting.com/site/development/single-instance-winform-app-csharp-mutex-named-pipes/
Wed, 04 Jul 2018 11:22:00 +0000

The post Single Instance WinForm App in C# with Mutex and Named Pipes appeared first on AutoIt Consulting.

Overview

This post shows how to create a single instance WinForm C# application using a Mutex. It also demonstrates how to send any object between instances with Named Pipes.

While writing OSD Background I wanted an application that would only allow a single instance to run at any given time. In addition I wanted it so that if another instance was executed with different parameters – or a quit request – then those parameters would be sent to the original instance so that it could process them. After sending the parameters the second instance would terminate.

The final solution I came up with uses a Mutex to handle the single instancing. The information I wanted to send between instances is wrapped into a ‘payload’ object that is serialized into XML and sent over a Named Pipe.

Downloads

A fully working example can be downloaded from GitHub at https://github.com/AutoItConsulting/examples-csharp/tree/master/MutexSingleInstanceAndNamedPipe

On first run the application shows a simple WinForm application with an empty text box. When another attempt is made to run the application the command-line parameters are sent via a Named Pipe to the first instance and shown in the text box.

Single Instance Using a Mutex

Mutex is a simple object: you pick a unique instance name and create a new Mutex with it. The constructor returns a bool that tells you whether the mutex is owned by just the current process or is already in use elsewhere. As long as the mutex is referenced, other processes that create a Mutex object with the same name will see that another instance is running. In the code below I request the Mutex in the form Load event and release it in the form Closed event.

Note: It is critical to reference the Mutex object right at the end of your program’s execution rather than relying on the garbage collector to clean up after your program exits. Otherwise in a release build the compiler is clever enough to see that after your initial creation of the Mutex object you never use it again and will signal for it to be disposed before your program has exited. Of course, Mutex implements the IDisposable interface so it’s good practice to explicitly dispose it anyway. This explicit dispose in the Form Close event is enough to prevent the compiler from automatically disposing the Mutex earlier.

Here are the relevant functions:

public partial class FormMain : Form
{
    private const string MutexName = "MUTEX_SINGLEINSTANCEANDNAMEDPIPE";
    private bool _firstApplicationInstance;
    private Mutex _mutexApplication;

    private bool IsApplicationFirstInstance()
    {
        // Allow for multiple runs but only try and get the mutex once
        if (_mutexApplication == null)
        {
            _mutexApplication = new Mutex(true, MutexName, out _firstApplicationInstance);
        }

        return _firstApplicationInstance;
    }
    
    private void FormMain_Load(object sender, EventArgs e)
    {
        // First instance
        if (IsApplicationFirstInstance())
        {
            // Yes
            // Do something
        }
        else
        {
            // Close down
            Close();
        }
    }
    
    private void FormMain_FormClosed(object sender, FormClosedEventArgs e)
    {
        // Close and dispose our mutex.
        if (_mutexApplication != null)
        {
            _mutexApplication.Dispose();
        }
    }
}

Communication Payload

The ‘payload’ is the data I wanted to send between instances. In this example I used a custom object that has a copy of the command-line parameters used in the form of a List<string>. I am using the XmlSerializer so I’ve decorated the class with XML attributes. The BinaryFormatter (or any other serializer) could also be used in the same way. I chose XmlSerializer because in my OSD Background application I use it to send a copy of an XML options file.

public class NamedPipeXmlPayload
{
    /// <summary>
    ///     A list of command line arguments.
    /// </summary>
    [XmlElement("CommandLineArguments")]
    public List<string> CommandLineArguments { get; set; } = new List<string>();
}

Named Pipe Server

We create a single Named Pipe server which does the following:

  • Creates a new Named Pipe server and asynchronously waits for a client to connect ( NamedPipeServerCreateServer() )
  • On client connection, reads the payload sent from the client ( NamedPipeServerConnectionCallback() )
  • Deserializes the payload into our custom NamedPipeXmlPayload object ( NamedPipeServerConnectionCallback() )
  • Creates a new Named Pipe server and repeats ( NamedPipeServerCreateServer() )

By default a Named Pipe can be accessed from the network as well as the local machine. I don’t want that, so I explicitly create a Named Pipe that denies network access.

Note: The NamedPipeServerConnectionCallback method takes place on a new thread. If you want to interact with the UI thread you will need to use the appropriate methods. In this example I update the text box in the UI thread using the Invoke functionality.

The code for all the relevant functions is below:

public partial class FormMain : Form
{
	private const string PipeName = "PIPE_SINGLEINSTANCEANDNAMEDPIPE";
	private readonly object _namedPiperServerThreadLock = new object();
	private NamedPipeServerStream _namedPipeServerStream;
	private NamedPipeXmlPayload _namedPipeXmlPayload;

	private void FormMain_FormClosed(object sender, FormClosedEventArgs e)
	{
		// Dispose the named pipe stream
		if (_namedPipeServerStream != null)
		{
			_namedPipeServerStream.Dispose();
		}
	}

	private void FormMain_Load(object sender, EventArgs e)
	{
		// If we are the first instance then start the named pipe server listening and allow the form to load
		if (IsApplicationFirstInstance())
		{
			// Create a new pipe - it will return immediately and async wait for connections
			NamedPipeServerCreateServer();
		}
	}

	/// <summary>
	///     Starts a new pipe server if one isn't already active.
	/// </summary>
	private void NamedPipeServerCreateServer()
	{
		// Create a new pipe accessible by local authenticated users, disallow network
		var sidNetworkService = new SecurityIdentifier(WellKnownSidType.NetworkServiceSid, null);
		var sidWorld = new SecurityIdentifier(WellKnownSidType.WorldSid, null);

		var pipeSecurity = new PipeSecurity();

		// Deny network access to the pipe
		var accessRule = new PipeAccessRule(sidNetworkService, PipeAccessRights.ReadWrite, AccessControlType.Deny);
		pipeSecurity.AddAccessRule(accessRule);

		// Allow Everyone to read/write
		accessRule = new PipeAccessRule(sidWorld, PipeAccessRights.ReadWrite, AccessControlType.Allow);
		pipeSecurity.AddAccessRule(accessRule);

		// Current user is the owner
		SecurityIdentifier sidOwner = WindowsIdentity.GetCurrent().Owner;
		if (sidOwner != null)
		{
			accessRule = new PipeAccessRule(sidOwner, PipeAccessRights.FullControl, AccessControlType.Allow);
			pipeSecurity.AddAccessRule(accessRule);
		}

		// Create pipe and start the async connection wait
		_namedPipeServerStream = new NamedPipeServerStream(
			PipeName,
			PipeDirection.In,
			1,
			PipeTransmissionMode.Byte,
			PipeOptions.Asynchronous,
			0,
			0,
			pipeSecurity);

		// Begin async wait for connections
		_namedPipeServerStream.BeginWaitForConnection(NamedPipeServerConnectionCallback, _namedPipeServerStream);
	}

	/// <summary>
	///     The function called when a client connects to the named pipe. Note: This method is called on a non-UI thread.
	/// </summary>
	/// <param name="iAsyncResult"></param>
	private void NamedPipeServerConnectionCallback(IAsyncResult iAsyncResult)
	{
		try
		{
			// End waiting for the connection
			_namedPipeServerStream.EndWaitForConnection(iAsyncResult);

			// Read data and prevent access to _namedPipeXmlPayload during threaded operations
			lock (_namedPiperServerThreadLock)
			{
				// Read data from client
				var xmlSerializer = new XmlSerializer(typeof(NamedPipeXmlPayload));
				_namedPipeXmlPayload = (NamedPipeXmlPayload)xmlSerializer.Deserialize(_namedPipeServerStream);

				// _namedPipeXmlPayload contains the data sent from the other instance
				// As an example output it to the textbox
				// In more complicated cases would need to do some processing here and possibly pass to UI thread
				TextBoxAppend(_namedPipeXmlPayload);
			}
		}
		catch (ObjectDisposedException)
		{
			// EndWaitForConnection will exception when someone closes the pipe before connection made
			// In that case we dont create any more pipes and just return
			// This will happen when app is closing and our pipe is closed/disposed
			return;
		}
		catch (Exception)
		{
			// ignored
		}
		finally
		{
			// Close the original pipe (we will create a new one each time)
			_namedPipeServerStream.Dispose();
		}

		// Create a new pipe for next connection
		NamedPipeServerCreateServer();
	}
	
	/// <summary>
	///     Appends string version of the payload to the end of the text box. Handles being called from a non UI thread.
	/// </summary>
	private void TextBoxAppend(NamedPipeXmlPayload namedPipeXmlPayload)
	{
		if (textBoxOutput.InvokeRequired)
		{
			textBoxOutput.Invoke((MethodInvoker)delegate { TextBoxAppend(namedPipeXmlPayload); });
			return;
		}

		var message = string.Empty;
		foreach (string commandLine in namedPipeXmlPayload.CommandLineArguments)
		{
			message += "\r\nCommandLine: " + commandLine;
		}

		textBoxOutput.Text += message + "\r\n\r\n";
	}
}

Named Pipe Client

The Named Pipe client operations are much simpler:

  • Check if this is the first instance of the application; if not, prepare the client message ( FormMain_Load() )
  • Add the command-line parameters to our custom payload object ( FormMain_Load() )
  • Connect to the Named Pipe and send the payload ( NamedPipeClientSendOptions() )
  • Close the application

The code is below:

public partial class FormMain : Form
{
    private const string PipeName = "PIPE_SINGLEINSTANCEANDNAMEDPIPE";
    private NamedPipeServerStream _namedPipeServerStream;
    private NamedPipeXmlPayload _namedPipeXmlPayload;

    private void FormMain_Load(object sender, EventArgs e)
    {
        // If we are the first instance then start the named pipe server listening and allow the form to load
        if (IsApplicationFirstInstance())
        {
            // Create a new pipe - it will return immediately and async wait for connections
            NamedPipeServerCreateServer();
        }
        else
        {
            // We are not the first instance, send the named pipe message with our payload and stop loading
            var namedPipeXmlPayload = new NamedPipeXmlPayload
            {
                CommandLineArguments = Environment.GetCommandLineArgs().ToList()
            };

            // Send the message
            NamedPipeClientSendOptions(namedPipeXmlPayload);

            // Stop loading form and quit
            Close();
        }
    }

    /// <summary>
    ///     Uses a named pipe to send the currently parsed options to an already running instance.
    /// </summary>
    /// <param name="namedPipePayload"></param>
    private void NamedPipeClientSendOptions(NamedPipeXmlPayload namedPipePayload)
    {
        try
        {
            using (var namedPipeClientStream = new NamedPipeClientStream(".", PipeName, PipeDirection.Out))
            {
                namedPipeClientStream.Connect(3000); // Maximum wait 3 seconds

                var xmlSerializer = new XmlSerializer(typeof(NamedPipeXmlPayload));
                xmlSerializer.Serialize(namedPipeClientStream, namedPipePayload);
            }
        }
        catch (Exception)
        {
            // Error connecting or sending
        }
    }
}


 

Script to Detect if Running from SCCM Task Sequence and Avoid False Positives
https://www.autoitconsulting.com/site/deployment/script-detect-running-sccm-task-sequence-avoid-false-positives/
Wed, 20 Jun 2018 13:22:32 +0000

The post Script to Detect if Running from SCCM Task Sequence and Avoid False Positives appeared first on AutoIt Consulting.

Overview

This post shows how a script can detect if it is running from inside an SCCM Task Sequence in a way that avoids false positives.

The Problem

At times it’s useful for a script to detect if it is running from an SCCM Task Sequence so it can alter its behaviour accordingly. The most commonly used way to do this involves trying to create an instance of the COM object Microsoft.SMS.TSEnvironment. This COM object is temporarily registered during Task Sequence execution and unregistered on completion. So if a script successfully creates an instance of the COM object, it can be assumed that the script is running inside a Task Sequence.

Or so I thought…

I was involved in a simple Package and Program script deployment recently and noticed that in around 1% of cases my script was failing. The script logs on the affected machines showed that they were detecting an SCCM Task Sequence and invoking Task Sequence-specific logic; because they actually weren’t in a Task Sequence, the script was crashing. On those machines the script was successfully creating the Microsoft.SMS.TSEnvironment COM object and therefore assuming a Task Sequence. On further investigation I found that they all had a left-over C:\_SMSTaskSequence folder on the C: drive and that a previous OSD build had not finished cleanly (an unrelated issue).

During a successful Task Sequence deployment a clean-up task runs on completion. The smsts.log file shows certain folders and COM objects being unregistered; the log entries are as follows:

Successfully unregistered Task Sequencing Environment COM Interface.	TSManager	19/06/2018 12:24:52	2428 (0x097C)
Executing command line: "C:\WINDOWS\CCM\TsProgressUI.exe" /Unregister	TSManager	19/06/2018 12:24:52	2428 (0x097C)
==========[ TsProgressUI started in process 1044 ]==========	TsProgressUI	19/06/2018 12:24:52	1040 (0x0410)
Command line: "C:\WINDOWS\CCM\TsProgressUI.exe" /Unregister	TsProgressUI	19/06/2018 12:24:52	1040 (0x0410)
Unregistering COM classes	TsProgressUI	19/06/2018 12:24:52	1040 (0x0410)
Unregistering class objects	TsProgressUI	19/06/2018 12:24:52	1040 (0x0410)
Shutdown complete.	TsProgressUI	19/06/2018 12:24:52	1040 (0x0410)
Process completed with exit code 0	TSManager	19/06/2018 12:24:52	2428 (0x097C)
Successfully unregistered TS Progress UI.	TSManager	19/06/2018 12:24:52	2428 (0x097C)

If a Task Sequence fails to clean up successfully then you can be left with a left-over C:\_SMSTaskSequence folder and a registered Microsoft.SMS.TSEnvironment COM object, and these can confuse the usual method of detecting a Task Sequence.

The Solution

Although it would be best to avoid machines getting into this state in the first place it can be worked around. The workaround is as follows:

  • Attempt to create the COM object Microsoft.SMS.TSEnvironment
  • If successful, also attempt to read the Task Sequence variable _SMSTSType
  • If _SMSTSType is not blank then we are in a Task Sequence

_SMSTSType is set to 1 or 2 depending on whether the current Task Sequence is an OSD or Custom type. If the value is blank then you are not in a Task Sequence.

Script examples are shown below.

VBScript

The code below shows the solution for VBScript:

'
' Function to test if script is running from a Task Sequence. Will return True/False.
'

Function IsInTaskSequence()

	' False by default
	IsInTaskSequence = False

	On Error Resume Next
	Dim oTSEnv : Set oTSEnv = CreateObject("Microsoft.SMS.TSEnvironment")
	If Err.Number = 0 Then 
		' May succeed because of a failed Task Sequence or bad clean-up of COM objects.
		' Double check by reading the Task Sequence type; if it's blank we are not in a Task Sequence
		If oTSEnv("_SMSTSType") <> "" Then 
			IsInTaskSequence = True
		End If
	End If
	On Error Goto 0

End Function

Cleaning Up Task Sequence Left Overs

If you want to clean-up a machine that has left over Task Sequence COM objects then one method is to advertise a new Task Sequence to it that does nothing except run a dummy command. It should clean itself up on completion.

Alternatively if you want to manually fix up a machine in this state you can unregister the COM components as follows:

regsvr32.exe /s /u C:\Windows\CCM\TSCore.dll
C:\Windows\CCM\TsProgressUI.exe /Unregister

 

GImageX v2.2.0 Released for Windows 10
https://www.autoitconsulting.com/site/image-engineering/gimagex-v2-2-0-released-for-windows-10/
Wed, 13 Jun 2018 15:33:46 +0000

The post GImageX v2.2.0 Released for Windows 10 appeared first on AutoIt Consulting.

Overview

GImageX v2.2.0 has been released.

GImageX is a freeware GUI for working with imaging WIM files providing an alternative to DISM. It can be used to capture, apply, mount, export, split and delete WIM files. GImageX is provided as a 32-bit and 64-bit native application that can be run even in WinPE. GImageX uses the supported Microsoft WIMGAPI API for working with WIM files.

Since imaging support was added to DISM I’ve not really needed a graphical interface for the tool, so GImageX had not been updated for a while. However, I decided to fix some outstanding bugs, add split WIM support, and make sure that it still works properly on the latest versions of Windows 10.

Changes

In summary the changes to GImageX are:

  • Compiled against the latest WIMGAPI libraries supplied in the Windows ADK for Windows 10 1803.
  • Mount and Unmount operations now show progress.
  • Added Split WIM support.
  • Fixed mouse cursor busy / arrow issues.

 

AutoIt Cmdlets for Windows PowerShell
https://www.autoitconsulting.com/site/scripting/autoit-cmdlets-for-windows-powershell/
Sun, 12 Jul 2015 17:32:15 +0000

The post AutoIt Cmdlets for Windows PowerShell appeared first on AutoIt Consulting.

Overview

The newest versions of the AutoIt scripting language now come with a bonus for PowerShell users. A set of native PowerShell Cmdlets! This allows you to add the unique features of AutoIt – window manipulation and keystroke simulation – to your usual PowerShell scripts. As an additional bonus, the AutoIt PowerShell Cmdlets and Assemblies are digitally signed so they can be used with the more strict execution policies. The Cmdlets will also run natively with x86 and x64 versions of PowerShell!

This post will show how to use the AutoIt PowerShell Cmdlets to open notepad and type in some text in the edit control.

Installation

First download the latest version of AutoIt. You can run the full installer, or just download the zip file to get the required bits. Running the installer has the advantage of automatically registering the AutoIt Cmdlets. The files you need are as follows (get them from the zip file or the Program Files folder after installation):

  • AutoItX.psd1
  • AutoItX3.PowerShell.dll
  • AutoItX3.Assembly.dll
  • AutoItX3.dll
  • AutoItX3_x64.dll

Usage

To use the Cmdlets open a PowerShell cmd prompt and enter:

Import-Module .\AutoItX.psd1

Now you can get a list of available AutoIt Cmdlets by doing Get-Command *AU3*:

PS> Get-Command *AU3*

Name                              Category  Module 
----                              --------  ------ 
Invoke-AU3MouseWheel              Cmdlet    AutoItX
Move-AU3Mouse                     Cmdlet    AutoItX
Invoke-AU3MouseClickDrag          Cmdlet    AutoItX
Get-AU3MouseCursor                Cmdlet    AutoItX
Invoke-AU3MouseUp                 Cmdlet    AutoItX
Assert-AU3WinActive               Cmdlet    AutoItX
Assert-AU3WinExists               Cmdlet    AutoItX
Assert-AU3IsAdmin                 Cmdlet    AutoItX
Invoke-AU3Shutdown                Cmdlet    AutoItX
Send-AU3ControlKey                Cmdlet    AutoItX
Invoke-AU3MouseDown               Cmdlet    AutoItX
Invoke-AU3MouseClick              Cmdlet    AutoItX
Invoke-AU3ControlTreeView         Cmdlet    AutoItX
Invoke-AU3ControlListView         Cmdlet    AutoItX
Invoke-AU3ControlCommand          Cmdlet    AutoItX
Invoke-AU3ControlClick            Cmdlet    AutoItX
Move-AU3Control                   Cmdlet    AutoItX
Set-AU3ControlText                Cmdlet    AutoItX
Show-AU3Control                   Cmdlet    AutoItX
Hide-AU3Control                   Cmdlet    AutoItX
Get-AU3ControlText                Cmdlet    AutoItX
Get-AU3ControlFocus               Cmdlet    AutoItX
Set-AU3ControlFocus               Cmdlet    AutoItX
Disable-AU3Control                Cmdlet    AutoItX
Enable-AU3Control                 Cmdlet    AutoItX
Get-AU3StatusbarText              Cmdlet    AutoItX
Invoke-AU3RunAsWait               Cmdlet    AutoItX
Invoke-AU3RunAs                   Cmdlet    AutoItX
Invoke-AU3RunWait                 Cmdlet    AutoItX
Invoke-AU3Run                     Cmdlet    AutoItX
Set-AU3Clip                       Cmdlet    AutoItX
Get-AU3Clip                       Cmdlet    AutoItX
Set-AU3WinTrans                   Cmdlet    AutoItX
Set-AU3WinTitle                   Cmdlet    AutoItX
Set-AU3WinState                   Cmdlet    AutoItX
Set-AU3WinOnTop                   Cmdlet    AutoItX
Move-AU3Win                       Cmdlet    AutoItX
Show-AU3WinMinimizeAllUndo        Cmdlet    AutoItX
Show-AU3WinMinimizeAll            Cmdlet    AutoItX
Get-AU3WinState                   Cmdlet    AutoItX
Get-AU3WinProcess                 Cmdlet    AutoItX
Get-AU3WinClassList               Cmdlet    AutoItX
Get-AU3WinCaretPos                Cmdlet    AutoItX
Get-AU3WinClientSize              Cmdlet    AutoItX
Get-AU3ControlPos                 Cmdlet    AutoItX
Get-AU3ControlHandle              Cmdlet    AutoItX
Get-AU3MousePos                   Cmdlet    AutoItX
Get-AU3WinPos                     Cmdlet    AutoItX
Get-AU3WinHandle                  Cmdlet    AutoItX
Get-AU3ErrorCode                  Cmdlet    AutoItX
Initialize-AU3                    Cmdlet    AutoItX
Show-AU3WinActivate               Cmdlet    AutoItX
Close-AU3Win                      Cmdlet    AutoItX
Wait-AU3WinClose                  Cmdlet    AutoItX
Wait-AU3WinNotActive              Cmdlet    AutoItX
Set-AU3Option                     Cmdlet    AutoItX
Send-AU3Key                       Cmdlet    AutoItX
Wait-AU3Win                       Cmdlet    AutoItX
Wait-AU3WinActive                 Cmdlet    AutoItX
Get-AU3WinTitle                   Cmdlet    AutoItX
Get-AU3WinText                    Cmdlet    AutoItX

Example

I’ll show how to use the Cmdlets with a simple example that opens notepad.exe and modifies the edit window by setting the text and simulating some keystrokes. First create a blank PowerShell script called notepad_example.ps1 in the same folder as the AutoItX components above and open it for editing.

Now we want to import the PowerShell module which is AutoItX.psd1. Enter the following in the script:

Import-Module .\AutoItX.psd1

We want to run notepad.exe:

Invoke-AU3Run -Program notepad.exe

After notepad opens we want to wait for the “Untitled - Notepad” window to appear. You might need to change the title for non-English versions of Windows:

$notepadTitle = "Untitled - Notepad"
Wait-AU3Win -Title $notepadTitle
$winHandle = Get-AU3WinHandle -Title $notepadTitle

Get-AU3WinHandle returns a native Win32 handle to the notepad window. We can use this handle in many other AutoIt functions, and because it uniquely identifies the window it is much more reliable than repeatedly matching on a window title. If you have obtained a window handle using any other Win32 function you can use it with AutoIt.

After obtaining the handle to the notepad window we want to ensure that the window is active and then get a handle to the Edit Control. Using the AU3Info.exe tool that comes with AutoIt we can find that the name of the edit control in notepad is Edit1.

Show-AU3WinActivate -WinHandle $winHandle
$controlHandle = Get-AU3ControlHandle -WinHandle $winHandle -Control "Edit1"

Now that we have a handle to the edit control we can set text in two ways: directly (Set-AU3ControlText) or with simulated keystrokes (Send-AU3ControlKey):

Set-AU3ControlText -ControlHandle $controlHandle -NewText "Hello! This is being controlled by AutoIt and PowerShell!" -WinHandle $winHandle
Send-AU3ControlKey -ControlHandle $controlHandle -Key "{ENTER}simulate key strokes - line 1" -WinHandle $winHandle

Now let’s see what the entire script looks like:

# Import the AutoIt PowerShell module
Import-Module .\AutoItX.psd1

# Run notepad.exe
Invoke-AU3Run -Program notepad.exe

# Wait for an untitled notepad window and get the handle
$notepadTitle = "Untitled - Notepad"
Wait-AU3Win -Title $notepadTitle
$winHandle = Get-AU3WinHandle -Title $notepadTitle

# Activate the window
Show-AU3WinActivate -WinHandle $winHandle

# Get the handle of the notepad text control for reliable operations
$controlHandle = Get-AU3ControlHandle -WinHandle $winHandle -Control "Edit1"

# Change the edit control
Set-AU3ControlText -ControlHandle $controlHandle -NewText "Hello! This is being controlled by AutoIt and PowerShell!" -WinHandle $winHandle

# Send some keystrokes to the edit control
Send-AU3ControlKey -ControlHandle $controlHandle -Key "{ENTER}simulate key strokes - line 1" -WinHandle $winHandle
Send-AU3ControlKey -ControlHandle $controlHandle -Key "{ENTER}simulate key strokes - line 2" -WinHandle $winHandle
Send-AU3ControlKey -ControlHandle $controlHandle -Key "{ENTER}{ENTER}" -WinHandle $winHandle

This is how the notepad window should look if everything is working correctly:

notepad_powershell

 

The post AutoIt Cmdlets for Windows PowerShell appeared first on AutoIt Consulting.

]]>
WinPE Version List https://www.autoitconsulting.com/site/deployment/winpe-version-list/ Wed, 08 Jul 2015 14:40:21 +0000 https://www.autoitconsulting.com/site/?p=1413 Overview This post contains a WinPE version list including the WinPE version, the Windows version, and the numeric Windows version string that it was built from. It covers all versions of WinPE from the initial Windows XP versions to the latest Windows 10 versions. Windows Preinstallation Environment (Windows PE or WinPE) is a lightweight version […]

The post WinPE Version List appeared first on AutoIt Consulting.

]]>
Overview

This post contains a WinPE version list including the WinPE version, the Windows version, and the numeric Windows version string that it was built from. It covers all versions of WinPE from the initial Windows XP versions to the latest Windows 10 versions.

Windows Preinstallation Environment (Windows PE or WinPE) is a lightweight version of Windows that is commonly used as part of Windows deployment. Originally it was only available to large OEMs but it is now publicly available as part of the Windows deployment kits like the Windows Automated Installation Kit (Windows AIK). The most recent version at the time of writing is available in the Windows Assessment and Deployment Kit (Windows ADK) for Windows 10.

When working with boot WIM files in MDT and ConfigMgr you will come across many boot images. Usually these tools only show the Windows version string that the WinPE image was built from. This doesn’t make it immediately apparent which version of Windows it is optimised to deploy or which drivers you are able to inject into that boot image.

WinPE Version List (Updated June 16, 2018)

The table below shows the main versions of WinPE that you will see in the wild, along with their WinPE version, the Windows version name, and the numeric Windows version string that it was built from.

WinPE Windows Windows Version Notes
1.0 Windows XP 5.1.2600.x First version of WinPE.
1.1 Windows XP SP1 5.1.2600.x
1.2 Windows Server 2003 5.2.3790.x
1.5 Windows XP SP2 5.1.2600.x Windows PE 2004.
1.6 Windows Server 2003 SP1 5.2.3790.x Windows PE 2005.
2.0 Windows Vista 6.0.6000.x
2.1 Windows Server 2008 6.0.6001.x
2.2 Windows Server 2008 SP2 6.0.6002.x
3.0 Windows 7 6.1.7600.x Windows AIK 2.0.
3.1 Windows 7 SP1 6.1.7601.x Windows AIK Supplement for Windows 7 SP1.
4.0 Windows 8 6.2.9200.x Windows ADK (Windows Kits 8.0).
5.0 Windows 8.1 6.3.9600.x Windows ADK (Windows Kits 8.1).
5.1 Windows 8.1 Update 1 6.3.9600.x Windows ADK (Windows Kits 8.1 Update).
10 (1507) Windows 10 1507 10.0.10240.16384 Windows ADK (Windows Kits 10.0) 1507
10 (1511) Windows 10 1511 10.0.10586.0 Windows ADK (Windows Kits 10.0) 1511
10 (1607) Windows 10 1607 10.0.14393.0 Windows ADK (Windows Kits 10.0) 1607
10 (1703) Windows 10 1703 10.0.15063.0 Windows ADK (Windows Kits 10.0) 1703
10 (1709) Windows 10 1709 10.0.16299.15 Windows ADK (Windows Kits 10.0) 1709
10 (1803) Windows 10 1803 10.0.17134.1 Windows ADK (Windows Kits 10.0) 1803
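As an illustration of how the table can be put to use, here is a small lookup sketch (Python is used purely for illustration; the mapping covers only a subset of the rows above, and the label strings are mine):

```python
# Map a boot image's reported Windows version string to the WinPE release
# it was built from, based on a subset of the table above.
WINPE_BUILDS = {
    "6.0.6000": "WinPE 2.0 (Windows Vista)",
    "6.1.7600": "WinPE 3.0 (Windows 7)",
    "6.1.7601": "WinPE 3.1 (Windows 7 SP1)",
    "6.2.9200": "WinPE 4.0 (Windows 8)",
    "6.3.9600": "WinPE 5.1 (Windows 8.1 Update)",
    "10.0.10240": "WinPE 10 (1507)",
    "10.0.14393": "WinPE 10 (1607)",
    "10.0.17134": "WinPE 10 (1803)",
}

def winpe_from_version(version):
    """Return the WinPE release for a full version string like 10.0.17134.1."""
    # Only the first three components identify the build; the fourth is a revision.
    prefix = ".".join(version.split(".")[:3])
    return WINPE_BUILDS.get(prefix, "Unknown")

print(winpe_from_version("10.0.17134.1"))  # WinPE 10 (1803)
```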

 

 

The post WinPE Version List appeared first on AutoIt Consulting.

]]>
Automating Office 365 Click-to-Run First Use Without Group Policy https://www.autoitconsulting.com/site/deployment/automating-office-365-click-run-first-use-without-group-policy/ Wed, 04 Feb 2015 14:30:41 +0000 https://www.autoitconsulting.com/site/?p=1254 Overview This post shows how to automate the numerous “first-run” dialogs that are shown when a user runs an Office 2013 or Office 365 application for the first time. With a standard local installation of Office 2013 this can be done in two ways. Firstly, by using Group Policy (the recommended way). Secondly, by using […]

The post Automating Office 365 Click-to-Run First Use Without Group Policy appeared first on AutoIt Consulting.

]]>
Overview

This post shows how to automate the numerous “first-run” dialogs that are shown when a user runs an Office 2013 or Office 365 application for the first time.

With a standard local installation of Office 2013 this can be done in two ways. Firstly, by using Group Policy (the recommended way). Secondly, by using the Office Customization Tool (OCT) to create an .msp file that can be used during setup to apply custom user settings. However, when using the “Click-to-Run” version of Office 365 there is no way to use the OCT so user settings can only be configured using Group Policy.

In a recent assignment I needed to create an Office 365 package that would install silently in various scenarios:

  • Domain joined / Group Policy
  • Domain joined / No Group Policy
  • Home machine

Unwanted Screens and Messages

The customer was happy with the “default” settings that Office comes with, but when the user first runs Office numerous screens pop up that they wanted to suppress (with the exception of the Office 365 sign-in screen). These are:

“First things first” / License Agreement

The usual annoying license agreement and questions about auto updates and product improvement that no user will care about or fully understand.

Office First Things First

Default File Types

There isn’t a non-IT user alive that has any idea which of these options is correct. I believe this screen only appears on EU machines. Thanks EU!

Office Default File Types

“Welcome to your new Office”

A useless tutorial about OneDrive and forcing the user to set critical options like “ribbon background”.

Office Welcome

Controlling Settings with Registry Keys

To disable all these screens we need to configure the following registry entries:

Registry Key Value
HKCU\Software\Microsoft\Office\15.0\FirstRun\BootedRTM 1 (DWORD)
HKCU\Software\Microsoft\Office\15.0\FirstRun\disablemovie 1 (DWORD)
HKCU\Software\Microsoft\Office\15.0\Common\General\shownfirstrunoptin 1 (DWORD)
HKCU\Software\Microsoft\Office\15.0\Common\General\ShownFileFmtPrompt 1 (DWORD)
HKCU\Software\Microsoft\Office\15.0\Common\PTWatson\PTWOptIn 1 (DWORD)
HKCU\Software\Microsoft\Office\15.0\Common\qmenable 1 (DWORD)

The problem with this is that these keys are all user settings. If you try to set them at the end of your installation they usually won’t work, as the installation is likely to be running under an admin account. Unless you set them from a user-side login script or Group Policy, they won’t apply to each user that logs onto the machine.

To get this to work we can use a little-known feature of Office that allows you to specify HKEY_LOCAL_MACHINE keys that are automatically migrated into HKEY_CURRENT_USER when an Office application is first run by each user. There is no worthwhile documentation of this process except on this Deployment Guys blog post.

In summary, you create keys under HKLM in the following locations (depends on the OS and version of Office):

OS Version Office Version Key
32bit 32bit HKLM\SOFTWARE\Microsoft\Office\15.0\User Settings\MyCustomSettings
64bit 32bit HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\15.0\User Settings\MyCustomSettings
64bit 64bit HKLM\SOFTWARE\Microsoft\Office\15.0\User Settings\MyCustomSettings

The MyCustomSettings part of the key can be anything you like. You can even have multiple names for different groups of settings. Then under this key create another key called Create, and beneath that add the registry settings that you want created in HKEY_CURRENT_USER. The 15.0 part refers to Office 2013 / Office 365.

When a user runs an Office application it checks whether these settings have already been migrated for that user, and if not it creates the relevant keys in HKEY_CURRENT_USER. This is done before any user interface is shown, so it can successfully be used to set the options that hide the first-run dialogs.

This can be a little tricky to understand but if you look at the example script below you should get the idea. Something interesting to note is that this will work for any registry key – it doesn’t have to be Office related!

Example Script to Automate First Run

Here is a small batch script that can be run at the end of the Office installation to configure all keys so that when any new user runs Office for the first time they don’t see all the first run dialogs. For this example I will assume we will be using a 64bit OS with the 32bit version of Office 365 (by far the most common configuration in most Enterprises).

reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\15.0\User Settings\MyCustomUserSettings" /v Count /t REG_DWORD /d 1 /f >nul
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\15.0\User Settings\MyCustomUserSettings\Create\Software\Microsoft\Office\15.0\FirstRun" /v BootedRTM /t REG_DWORD /d 1 /f >nul
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\15.0\User Settings\MyCustomUserSettings\Create\Software\Microsoft\Office\15.0\FirstRun" /v disablemovie /t REG_DWORD /d 1 /f >nul
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\15.0\User Settings\MyCustomUserSettings\Create\Software\Microsoft\Office\15.0\Common\General" /v shownfirstrunoptin /t REG_DWORD /d 1 /f >nul
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\15.0\User Settings\MyCustomUserSettings\Create\Software\Microsoft\Office\15.0\Common\General" /v ShownFileFmtPrompt /t REG_DWORD /d 1 /f >nul
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\15.0\User Settings\MyCustomUserSettings\Create\Software\Microsoft\Office\15.0\Common\PTWatson" /v PTWOptIn /t REG_DWORD /d 1 /f >nul
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Office\15.0\User Settings\MyCustomUserSettings\Create\Software\Microsoft\Office\15.0\Common" /v qmenable /t REG_DWORD /d 1 /f >nul

 

 

The post Automating Office 365 Click-to-Run First Use Without Group Policy appeared first on AutoIt Consulting.

]]>
Get the Current Script Directory in PowerShell, VBScript and Batch https://www.autoitconsulting.com/site/scripting/get-current-script-directory-powershell-vbscript-batch/ Thu, 23 Oct 2014 13:30:09 +0000 https://www.autoitconsulting.com/site/?p=1230 Overview This post shows how to quickly get the current script directory using PowerShell, VBScript and Batch – the most commonly used scripting languages for Windows. The scripts I write usually read in other files or call other scripts. In order for these scripts to run from any location – such as a UNC path […]

The post Get the Current Script Directory in PowerShell, VBScript and Batch appeared first on AutoIt Consulting.

]]>
Overview

This post shows how to quickly get the current script directory using PowerShell, VBScript and Batch – the most commonly used scripting languages for Windows.

The scripts I write usually read in other files or call other scripts. In order for these scripts to run from any location – such as a UNC path – without hard coding paths they need to use relative paths. Depending on how the script is called the working directory may not be the same as the script file. For example, if your current directory is C:\Windows and you run the script \\server\share\somepath\myscript.cmd then any relative paths in the script file won’t work correctly.

One way around this is to make the script change its working directory right at the start and then use relative paths after that. But in some situations – such as batch files on UNC paths – this won’t always work. The best way to get around this is to determine the directory that the script resides in at the start of the script and then make all other paths reference that.

Because I jump around various scripting languages all the time, I tend to forget the best way to do this and have to hunt for examples in old scripts. As a reference for myself this post gives the template for getting the current script directory in the languages I tend to use: PowerShell, VBScript and batch.

Windows Batch

Windows batch is the trickiest in some ways – it is also the one that cannot support UNC working directories. There is a built-in variable %~dp0 which expands to the path that the script is located in, including the trailing slash. This can make for messy-looking scripts because to run setup.exe from the current script directory you would use %~dp0setup.exe. This works great but can be a little confusing for others to read because it looks like a typo.

My preferred method is to create a new variable at the top of the script using %~dp0 and then stripping the trailing backslash. Here is the script:

@ECHO OFF
REM Determine script location for Windows Batch File

REM Get current folder with no trailing slash
SET ScriptDir=%~dp0
SET ScriptDir=%ScriptDir:~0,-1%

ECHO Current script directory is %ScriptDir%

VBScript

VBScript is fairly straightforward: the full path of the running script is available in WScript.ScriptFullName and you can use the FileSystemObject class to get the parent folder name. Here is the script:

' Determine script location for VBScript
Dim oFSO : Set oFSO = CreateObject("Scripting.FileSystemObject")
Dim sScriptDir : sScriptDir = oFSO.GetParentFolderName(WScript.ScriptFullName)

Wscript.Echo "Current script directory is " & sScriptDir

PowerShell

PowerShell users have long used a snippet from this post that gets the script folder. However, it doesn’t work as expected depending on how the script was loaded. This altered version should work in all cases:

# Determine script location for PowerShell
$ScriptDir = Split-Path $script:MyInvocation.MyCommand.Path

Write-Host "Current script directory is $ScriptDir"

 

Hopefully this post will be a useful reference to you when trying to remember how to get the current script directory. I know that I’ll end up referencing it myself in the future!

 

The post Get the Current Script Directory in PowerShell, VBScript and Batch appeared first on AutoIt Consulting.

]]>
UTF-8 and UTF-16 Text Encoding Detection Library https://www.autoitconsulting.com/site/development/utf-8-utf-16-text-encoding-detection-library/ Sat, 23 Aug 2014 17:02:22 +0000 https://www.autoitconsulting.com/site/?p=1180 Overview This post shows how to detect UTF-8 and UTF-16 text and presents a fully functional C++ and C# library that can be used to help with the detection. I recently had to upgrade the text file handling feature of AutoIt to better handle text files where no byte order mark (BOM) was present. The […]

The post UTF-8 and UTF-16 Text Encoding Detection Library appeared first on AutoIt Consulting.

]]>
Overview

This post shows how to detect UTF-8 and UTF-16 text and presents a fully functional C++ and C# library that can be used to help with the detection.

I recently had to upgrade the text file handling feature of AutoIt to better handle text files where no byte order mark (BOM) was present. The older version of the code worked fine for UTF-8 files (with or without a BOM) but it wasn’t able to detect UTF-16 files without a BOM. I tried the IsTextUnicode Win32 API function but this seemed extremely unreliable and wouldn’t detect UTF-16 Big-Endian text in my tests.

Note that, especially for UTF-16 detection, there is always an element of ambiguity. This post by Raymond Chen shows that however you try to detect encoding there will always be some sequence of bytes that will make your guesses look stupid.

Here are the detection methods I’m currently using for the various types of text file. The order of the checks I perform are:

  • BOM
  • UTF-8
  • UTF-16 (newline)
  • UTF-16 (null distribution)

Downloads

The C# and C++ library can be downloaded from GitHub here: https://github.com/AutoItConsulting/text-encoding-detect

BOM Detection

I assume that if I find a BOM at the start of the file that it is valid. Although it’s possible that the BOM could just be ANSI text, it’s highly unlikely. The BOMs are as follows:

Encoding BOM
UTF-8 0xEF, 0xBB, 0xBF
UTF-16 Little Endian 0xFF, 0xFE
UTF-16 Big Endian 0xFE, 0xFF
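The table above translates directly into a first-pass check. Here is a minimal sketch in Python (used purely for convenience; the return labels are mine, not the library's Encoding enum values):

```python
def check_bom(data):
    """Return the encoding implied by a leading byte order mark, or None.

    Mirrors the BOM table: the 3-byte UTF-8 mark is tested first, then
    the 2-byte UTF-16 marks.
    """
    if data[:3] == b"\xEF\xBB\xBF":
        return "UTF-8 BOM"
    if data[:2] == b"\xFF\xFE":
        return "UTF-16 LE BOM"
    if data[:2] == b"\xFE\xFF":
        return "UTF-16 BE BOM"
    return None

# The utf-8-sig codec prepends the UTF-8 BOM for us.
print(check_bom("hello".encode("utf-8-sig")))  # UTF-8 BOM
```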

UTF-8 Detection

UTF-8 checking is reliable with a very low chance of false positives, so this is done first. If the text is valid UTF-8 but all the characters are in the range 0-127 then this is essentially ASCII text and can be treated as such – in this case I don’t continue to check for UTF-16.

If a byte is in the range 0-127 then it is a single-byte character and nothing more needs to be done. Values above 127 indicate the start of a multibyte sequence that uses the next 1, 2 or 3 bytes.

First byte Number of bytes in sequence
 0-127 1 byte
 194-223 2 bytes
 224-239 3 bytes
 240-244 4 bytes

These additional bytes are in the range 128-191. This scheme means that if we decode the text stream based on this method and no unexpected sequences occur then this is almost certainly UTF-8 text.
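The decoding walk described above can be sketched in a few lines of Python (a simplified illustration of the scheme, not the library's implementation: it follows the ranges in the table but skips some finer validity checks, such as overlong 3-byte sequences):

```python
def looks_like_utf8(data):
    """Walk the byte stream using the lead-byte table above.

    Returns "ascii" if every byte is 0-127, "utf-8" if valid multibyte
    sequences were seen, or None if an invalid sequence occurs.
    """
    i, multibyte_seen = 0, False
    while i < len(data):
        b = data[i]
        if b <= 0x7F:                 # single byte, nothing more to do
            i += 1
            continue
        if 0xC2 <= b <= 0xDF:
            trailing = 1
        elif 0xE0 <= b <= 0xEF:
            trailing = 2
        elif 0xF0 <= b <= 0xF4:
            trailing = 3
        else:
            return None               # 0x80-0xC1 and 0xF5-0xFF cannot start a sequence
        # Every continuation byte must be in the range 0x80-0xBF.
        for j in range(i + 1, i + 1 + trailing):
            if j >= len(data) or not (0x80 <= data[j] <= 0xBF):
                return None
        multibyte_seen = True
        i += trailing + 1
    return "utf-8" if multibyte_seen else "ascii"

print(looks_like_utf8("héllo".encode("utf-8")))  # utf-8
```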

UTF-16 Detection

UTF-16 text is generally made up of 2-byte sequences (technically, there can be 4-byte sequences using surrogate pairs). Depending on the endianness of the file, the Unicode character 0x1234 could be represented in the byte stream as “0x12 0x34” or “0x34 0x12”. The BOM is usually used to easily determine whether the file is big or little endian. Without a BOM this is a little trickier to determine.

I use two methods to try to determine whether the text is UTF-16 and, if so, its endianness. The first uses the newline characters 0x0a and 0x0d. Depending on the endianness they will be sequenced as “0x0a 0x00” or “0x00 0x0a”. If every instance of these characters in a text file is encoded the same way then that is a good sign that the text is UTF-16, and it indicates whether the file is big or little endian. The drawback of this method is that it won’t work for very small amounts of text, or for files that don’t contain newlines.
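A minimal Python sketch of this newline heuristic (illustrative only; the real library combines it with the other checks described here):

```python
def utf16_endianness_from_newlines(data):
    """Guess UTF-16 endianness from how 0x0A/0x0D pair with null bytes.

    Counts "XX 00" (little endian) versus "00 XX" (big endian) patterns
    at even offsets; returns "le", "be", or None if there is no
    consistent signal (e.g. no newlines at all).
    """
    le = be = 0
    for i in range(0, len(data) - 1, 2):
        first, second = data[i], data[i + 1]
        if first in (0x0A, 0x0D) and second == 0x00:
            le += 1
        elif first == 0x00 and second in (0x0A, 0x0D):
            be += 1
    if le and not be:
        return "le"
    if be and not le:
        return "be"
    return None

sample = "line one\nline two\n".encode("utf-16-le")
print(utf16_endianness_from_newlines(sample))  # le
```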

The second method relies on the fact that many files may contain large amounts of pure ASCII text in the range 0-127. This applies especially to files generally used in IT like scripts and logs. When encoded in UTF-16 these are represented as the ASCII character and a null character. For example, space, 0x20 would be encoded as “0x00 0x20” or “0x20 0x00”. Depending on the endianness this will result in a large amount of nulls in the odd or even byte positions. We just need to scan the file for these odd and even nulls and if there is a significant percentage in the expected position then we can assume the text is UTF-16.
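The null-distribution check can be sketched like this (the 0.7 threshold is an arbitrary choice for this example, not the library's actual tuning):

```python
def utf16_endianness_from_nulls(data, threshold=0.7):
    """Guess UTF-16 endianness from where the null bytes fall.

    ASCII-heavy UTF-16 LE text has nulls in odd positions ("0x20 0x00"),
    big endian in even positions. threshold is the fraction of byte
    pairs that must show the pattern before we commit to a guess.
    """
    pairs = len(data) // 2
    if pairs == 0:
        return None
    odd_nulls = sum(1 for i in range(1, len(data), 2) if data[i] == 0)
    even_nulls = sum(1 for i in range(0, len(data) - 1, 2) if data[i] == 0)
    if odd_nulls / pairs > threshold and even_nulls / pairs < threshold:
        return "le"
    if even_nulls / pairs > threshold and odd_nulls / pairs < threshold:
        return "be"
    return None

print(utf16_endianness_from_nulls("mostly ascii text".encode("utf-16-le")))  # le
```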

The Library

The C# and C++ library can be downloaded from GitHub here: https://github.com/AutoItConsulting/text-encoding-detect

Using C# as the example, the two main public functions are:

public Encoding CheckBOM(byte[] buffer, int size)
public Encoding DetectEncoding(byte[] buffer, int size)

These functions return the Encoding which is the following enum:

public enum Encoding
{
    None,               // Unknown or binary
    ANSI,               // 0-255
    ASCII,              // 0-127
    UTF8_BOM,           // UTF8 with BOM
    UTF8_NOBOM,         // UTF8 without BOM
    UTF16_LE_BOM,       // UTF16 LE with BOM
    UTF16_LE_NOBOM,     // UTF16 LE without BOM
    UTF16_BE_BOM,       // UTF16-BE with BOM
    UTF16_BE_NOBOM      // UTF16-BE without BOM
}

The DetectEncoding function takes a byte buffer and a size parameter. The larger the buffer that is used, the more accurate the result will be. I’d recommend at least 4KB.

Here is an example of passing a buffer to the DetectEncoding function:

// Detect encoding
var textDetect = new TextEncodingDetect();
TextEncodingDetect.Encoding encoding = textDetect.DetectEncoding(buffer, buffer.Length);

Console.Write("Encoding: ");
if (encoding == TextEncodingDetect.Encoding.None)
{
    Console.WriteLine("Binary");
}
else if (encoding == TextEncodingDetect.Encoding.ASCII)
{
    Console.WriteLine("ASCII (chars in the 0-127 range)");
}
else if (encoding == TextEncodingDetect.Encoding.ANSI)
{
    Console.WriteLine("ANSI (chars in the 0-255 range)");
}
else if (encoding == TextEncodingDetect.Encoding.UTF8_BOM || encoding == TextEncodingDetect.Encoding.UTF8_NOBOM)
{
    Console.WriteLine("UTF-8");
}
else if (encoding == TextEncodingDetect.Encoding.UTF16_LE_BOM || encoding == TextEncodingDetect.Encoding.UTF16_LE_NOBOM)
{
    Console.WriteLine("UTF-16 Little Endian");
}
else if (encoding == TextEncodingDetect.Encoding.UTF16_BE_BOM || encoding == TextEncodingDetect.Encoding.UTF16_BE_NOBOM)
{
    Console.WriteLine("UTF-16 Big Endian");
}

Null and Binary Handling

One quirk of the library is how I chose to handle nulls (0x00). These are technically valid in UTF-8 sequences, but I’ve assumed that any file that contains a null is not ANSI/ASCII/UTF-8. Allowing nulls for UTF-8 can cause a false return where UTF-16 text containing just ASCII can appear to be valid UTF-8. To disable this behaviour just set the NullSuggestsBinary property on the library to false before calling DetectEncoding. In practice, most text files don’t contain nulls and the defaults are valid.
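The false positive described above is easy to demonstrate in a few lines of Python: the UTF-16 LE bytes for pure ASCII text pass strict UTF-8 decoding, because 0x00 is itself a legal UTF-8 byte:

```python
# UTF-16 LE encoding of pure ASCII text: every other byte is a null.
data = "Hello".encode("utf-16-le")
print(data)  # b'H\x00e\x00l\x00l\x00o\x00'

# Yet the very same bytes pass strict UTF-8 decoding without error,
# which is exactly the false positive that NullSuggestsBinary avoids.
decoded = data.decode("utf-8")
print(decoded == "H\x00e\x00l\x00l\x00o\x00")  # True
```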

 

The post UTF-8 and UTF-16 Text Encoding Detection Library appeared first on AutoIt Consulting.

]]>
Mass Redistribute Packages in ConfigMgr 2012 https://www.autoitconsulting.com/site/deployment/mass-redistribute-packages-configmgr-2012/ Fri, 10 Jan 2014 17:30:26 +0000 http://www.autoitconsulting.com/site/?p=1105 Overview Recently I came across an issue with System Center Configuration Manager (ConfigMgr) 2012 where a large number of packages and applications had failed to be distributed to a new Distribution Point (DP). There were many thousands of packages in the environment and a few hundred were showing in the error state in the console. […]

The post Mass Redistribute Packages in ConfigMgr 2012 appeared first on AutoIt Consulting.

]]>
Overview

Recently I came across an issue with System Center Configuration Manager (ConfigMgr) 2012 where a large number of packages and applications had failed to distribute to a new Distribution Point (DP). There were many thousands of packages in the environment and a few hundred were showing in the error state in the console. Unfortunately, once packages get into this state they don’t fix themselves, as it appears the ConfigMgr validation schedule only reports that there was a problem distributing – it doesn’t actually trigger a redistribution.

To fix the problem you have to identify each package that has a problem and go into its properties. Then you need to select the DPs you want to redistribute to and click Redistribute.

CM2012 Console Redistribute

You have to do this for each package that has a problem, taking care to only redistribute to the DPs that have failures. In an environment with a few hundred problem packages and thousands of DPs this is completely impractical. What we need is a script that can do the following:

  • Identify packages that were unsuccessfully distributed.
  • Mass redistribute packages that are in the error state.
  • Only redistribute packages to DPs that need it to avoid needless processing and network traffic.
  • Optionally, limit the process to a specific DP. For example, a newly built DP was incorrectly configured, has subsequently been fixed but all the packages assigned to it have entered an error state.

Redistribute Script

The VBScript is listed below and can also be downloaded here: CM2012_DP_Redist.zip

' ****************************************************************************
'
' Copyright (c) 2013, Jonathan Bennett / AutoIt Consulting Ltd
' All rights reserved.
' http://www.autoitconsulting.com
'
' THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
' ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
' WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
' DISCLAIMED. IN NO EVENT SHALL AUTOIT CONSULTING LTD BE LIABLE FOR ANY
' DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
' (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
' LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
' ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
' (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
' SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'
' ****************************************************************************
'
' Purpose:   Checks all packages assigned to a DP and redistributes any with errors.
' 
' Usage:     cscript.exe CM2012_DP_Redist.vbs
' 
' Version:   1.0.0
' 
' History:
' 1.0.0	20-Nov-2013 - Jonathan Bennett 
'       First version.
'
' ****************************************************************************

' ****************************************************************************
' Global constant and variable declarations
' ****************************************************************************

Option Explicit

' CHANGEABLE VARIABLES

' The name of the CAS/Primary site server
Public Const CASServerName = "CASSERVERNAME"

' Which DP to refresh packages for - leave this blank to check ALL DPs
Public Const DPServerName = "DPSERVERNAME"

' END OF CHANGEABLE VARIABLES


Public Const wbemFlagReturnImmediately = 16
Public Const wbemFlagForwardOnly = 32

Dim oSWbemServices

' ****************************************************************************
' End declarations
' ****************************************************************************

' ****************************************************************************
' Main routine
' ****************************************************************************

' Connect to CM 2012 (CAS)
If ConnectServer(CASServerName) = False Then
	WScript.Echo "Unable to connect to server."
	WScript.Quit 1
End If


' Find all packages with a bad status
Dim queryString
Dim dpStatuses, dpStatus, serverName, packageID, packageDPs, packageDP, nalPath

If DPServerName = "" Then
	queryString = "SELECT Name,PackageID,MessageState FROM SMS_DistributionDPStatus WHERE MessageState > 2"
Else
	queryString = "SELECT Name,PackageID,MessageState FROM SMS_DistributionDPStatus WHERE Name LIKE '%" & DPServerName & "%' AND MessageState > 2"
End If

Set dpStatuses = oSWbemServices.ExecQuery(queryString,, wbemFlagForwardOnly+wbemFlagReturnImmediately)
For Each dpStatus in dpStatuses
	serverName = UCase(dpStatus.Name)
	packageID = dpStatus.PackageID

	queryString = "SELECT * FROM SMS_DistributionPoint WHERE PackageID = '" & packageID & "'"
	Set packageDPs = oSWbemServices.ExecQuery(queryString,, wbemFlagForwardOnly+wbemFlagReturnImmediately)
	For Each packageDP in packageDPs
		nalPath = UCase(packageDP.ServerNalPath)

		If InStr(1, nalPath, serverName) > 0 Then
			WScript.Echo "Redistributing " & packageID & " on " & serverName & "..."
			packageDP.RefreshNow = True
			On Error Resume Next
			packageDP.Put_
			On Error Goto 0
		End If

	Next
Next


WScript.Quit 0


' ****************************************************************************
' Functions
' ****************************************************************************

Function ConnectServer(ByVal serverName)
	If serverName = "" Then serverName = "."
	
	Dim oSWbemLocator : Set oSWbemLocator = CreateObject("WbemScripting.SWbemLocator")
	
	On Error Resume Next
	Set oSWbemServices = oSWbemLocator.ConnectServer(serverName, "root\sms")
	If Err.Number <> 0 Then
		ConnectServer = False
		Exit Function
	End If
	On Error Goto 0
	
	Dim ProviderLoc : Set ProviderLoc = oSWbemServices.ExecQuery("Select * FROM SMS_ProviderLocation WHERE ProviderForLocalSite='True'")

	Dim Location
	For Each Location In ProviderLoc
		Set oSWbemServices = oSWbemLocator.ConnectServer(Location.Machine, "root\sms\site_" + Location.SiteCode)
		Exit For
	Next
	ConnectServer = True
End Function

At the top of the script, change the CASServerName and DPServerName values to match your environment. If DPServerName is left blank then all DPs are in scope; if a server is specified then redistribution is limited to just that DP. For example, my CAS is called server04 and I want to trigger packages on all DPs, so I would use:

Public Const CASServerName = "server04"
Public Const DPServerName = ""

How does the script work?

Well you need to understand that there are two main WMI classes that we need to work with:

SMS_DistributionDPStatus

This class details the status of a package on a DP. The main properties we are interested in are Name (the DP server name), PackageID and MessageState. The MSDN article for this class defines a MessageState of 3 as indicating an error, but my testing showed that the value actually used is 4. Just in case, the script treats anything greater than 2 as an error.

SMS_DistributionPoint 

This class represents an instance of a package assigned to a DP. It provides the RefreshNow method which triggers the redistribution of the specific package on the specific DP.

It might be surprising to learn that there is an instance of each of these classes for each package for each DP. So if you have 100 packages and 50 DPs then there are 5000 SMS_DistributionDPStatus and 5000 SMS_DistributionPoint objects. This is useful to understand if you are using a WMI browser when troubleshooting.

The script follows this process:

  1. Gets a collection of all SMS_DistributionDPStatus objects where the MessageState is greater than 2.
  2. Extracts the PackageID and DP server name for each of these objects.
  3. Gets a collection of all SMS_DistributionPoint objects where the PackageID and DP match from our SMS_DistributionDPStatus collection.
  4. Triggers the SMS_DistributionPoint.RefreshNow method for each member of the collection.
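
Stripped of the WMI plumbing, the matching logic in those four steps can be sketched in Python. The dictionaries below are hypothetical in-memory records standing in for SMS_DistributionDPStatus and SMS_DistributionPoint instances; the field names mirror the WMI properties, but the sample values are illustrative, not real site data:

```python
# Illustrative sketch of the redistribution matching logic. The dictionaries
# stand in for SMS_DistributionDPStatus and SMS_DistributionPoint WMI
# instances; sample values are hypothetical.

def find_redistributions(dp_statuses, distribution_points, dp_filter=""):
    """Return (PackageID, server) pairs that the script would redistribute."""
    to_refresh = []
    for status in dp_statuses:
        # Step 1: treat MessageState > 2 as an error (see the note above)
        if status["MessageState"] <= 2:
            continue
        # Optional DP filter, matching the DPServerName LIKE clause
        if dp_filter and dp_filter.upper() not in status["Name"].upper():
            continue
        server = status["Name"].upper()
        package_id = status["PackageID"]
        # Steps 2-3: find the SMS_DistributionPoint instance whose NAL path
        # names the same server for the same package
        for dp in distribution_points:
            if dp["PackageID"] == package_id and server in dp["ServerNalPath"].upper():
                # Step 4: the real script sets RefreshNow = True and calls Put_
                to_refresh.append((package_id, server))
    return to_refresh

statuses = [
    {"Name": "DP01.example.com", "PackageID": "ABC00001", "MessageState": 4},
    {"Name": "DP02.example.com", "PackageID": "ABC00002", "MessageState": 2},  # healthy
]
points = [
    {"PackageID": "ABC00001", "ServerNalPath": r"[\\DP01.EXAMPLE.COM\]"},
    {"PackageID": "ABC00002", "ServerNalPath": r"[\\DP02.EXAMPLE.COM\]"},
]
print(find_redistributions(statuses, points))  # [('ABC00001', 'DP01.EXAMPLE.COM')]
```

Only the failed package on DP01 is selected; the healthy package on DP02 is left alone, which is why the script is safe to rerun.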

After running the script any packages that have failed will be redistributed and the progress can be seen in the normal way in the ConfigMgr console.

As the script only redistributes failed packages it is safe to rerun as often as required.

 

The post Mass Redistribute Packages in ConfigMgr 2012 appeared first on AutoIt Consulting.

]]>
Detect an SSD Disk Using a Script https://www.autoitconsulting.com/site/scripting/detect-an-ssd-disk-using-a-script/ https://www.autoitconsulting.com/site/scripting/detect-an-ssd-disk-using-a-script/#comments Tue, 07 Jan 2014 10:58:29 +0000 http://www.autoitconsulting.com/site/?p=1108 Overview A common request when creating automated desktop builds or custom maintenance tools is the ability detect if the main system drive is an SSD disk or not. This information can be used to configure a system in a particular way. For example, to run (or not run) certain maintenance tasks or to optimally configure […]

The post Detect an SSD Disk Using a Script appeared first on AutoIt Consulting.

]]>
Overview

A common request when creating automated desktop builds or custom maintenance tools is the ability to detect whether the main system drive is an SSD. This information can be used to configure a system in a particular way, for example to run (or not run) certain maintenance tasks, or to configure system settings and services to take advantage of the way SSD disks perform.

Recent versions of Windows, like Windows 8, automatically optimise certain tasks if an SSD is detected, but earlier versions of Windows commonly used in corporate environments lack this functionality so it must be done manually. This kind of process is prone to errors as it relies on having the engineer decide if the disk is an SSD as some sort of build parameter. In other cases I’ve seen people maintain lists of drive vendors and models that are entered into .txt files and used by scripts and WMI calls to look for a match.

I recently released an updated version of the AutoIt scripting language and as part of the update I wanted to update the DriveGetType function to provide a solution to this problem. I added the ability to detect the bus and SSD status of a drive. The basis of the method I used is that described in this TechNet blog post about how the Windows defragmenter determines SSD drives. The main tests it performs are:

  • Disks whose driver reports “no seek penalty”.
  • Disks that report a nominal media rotation rate of 1.

Almost all modern SSD drives populate these values, so this is a more reliable method of detection than checking WMI for vendor strings.
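
As a hedged illustration (this is not AutoIt's actual implementation, just the combined heuristic reduced to a predicate), the two tests can be expressed as:

```python
def looks_like_ssd(incurs_seek_penalty, nominal_media_rotation_rate):
    """Illustrative combination of the two defragmenter heuristics.

    incurs_seek_penalty: result of the "seek penalty" device property query
        (False means the driver reports no seek penalty; None if not reported).
    nominal_media_rotation_rate: the ATA nominal media rotation rate value
        (1 means non-rotating media; None if not reported).
    """
    if incurs_seek_penalty is False:      # driver reports "no seek penalty"
        return True
    if nominal_media_rotation_rate == 1:  # non-rotating media
        return True
    return False

print(looks_like_ssd(False, None))   # True  - typical SSD
print(looks_like_ssd(True, 7200))    # False - a 7200 rpm spinning disk
```

Either signal on its own is enough to classify the drive as an SSD; a drive reporting neither is treated as rotational.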

Example Script

Here is a basic AutoIt script that will output a simple onscreen message showing the type, bus and SSD status of a given drive letter.  Remember to download AutoIt to run it.

; Some constants
Const $DT_DRIVETYPE = 1
Const $DT_SSDSTATUS = 2
Const $DT_BUSTYPE = 3

; Drive letter to check
Const $DriveLetter = "C:"

; Get drive type and verify it exists
$type = DriveGetType($DriveLetter, $DT_DRIVETYPE)
If @error Then
	MsgBox(4096, "Error", "Invalid drive requested")
	Exit
EndIf

; Get SSD status (blank return is non-SSD)
$ssd = DriveGetType($DriveLetter, $DT_SSDSTATUS)
If $ssd = "" Then $ssd = "Non SSD"

; Get Bus type
$bus = DriveGetType($DriveLetter, $DT_BUSTYPE)

; Create output message
$output = "Type: " & $type & @CRLF
$output &= "SSD: " & $ssd & @CRLF
$output &= "Bus: " & $bus
MsgBox(4096, "Drive Info", $output)

The above script just outputs a message with some info, which is great for testing the functionality. However, here is how you can take that script and turn it into something useful in a scripting environment or from a ConfigMgr or MDT Task Sequence.

Using AutoIt compile the following script as IsSSD.exe.

; Some constants
Const $DT_DRIVETYPE = 1
Const $DT_SSDSTATUS = 2
Const $DT_BUSTYPE = 3

; Check command line params
If $CmdLine[0] <> 1 Then
	MsgBox(4096, "Usage", "Usage:" & @CRLF & "IsSSD.exe <drive>")
	Exit
EndIf

; Drive letter to check is the first parameter
$DriveLetter = $CmdLine[1]

; Get SSD info
$ssd = DriveGetType($DriveLetter, $DT_SSDSTATUS)

; Return 1 if it is an SSD, otherwise 0
If $ssd = "SSD" Then
	Exit 1
Else
	Exit 0
EndIf

When compiled, the above script can be run along with the required drive letter as follows:

IsSSD.exe C:

It will return 1 if the specified drive was SSD, or 0 if it is not SSD or it cannot be determined. This return value can be accessed using the standard ERRORLEVEL variable from a batch file or used by a Task Sequence in ConfigMgr or MDT.

If you don’t want to compile the script, you could run it directly with the AutoIt3.exe interpreter as well:

AutoIt3.exe IsSSD.au3 C:

Download

I’ve created a .zip file with these scripts and the final IsSSD.exe file which can be downloaded here:

Download IsSSD

 

 

The post Detect an SSD Disk Using a Script appeared first on AutoIt Consulting.

]]>
https://www.autoitconsulting.com/site/scripting/detect-an-ssd-disk-using-a-script/feed/ 2
GImageX v2.1.0 Released for Windows 8.1 https://www.autoitconsulting.com/site/image-engineering/gimagex-v2-1-0-released-windows-8-1/ Thu, 21 Nov 2013 12:14:23 +0000 http://www.autoitconsulting.com/site/?p=1086 GImageX v2.1.0 has been released. GImageX is a graphical user interface for the ImageX tool from the Windows Assessment and Deployment Kit (Windows ADK). ImageX is used to capture and apply WIM images for Windows deployments. GImageX uses the supported Microsoft WIMGAPI API for working with WIM files. Previous versions of GImageX were compiled for the […]

The post GImageX v2.1.0 Released for Windows 8.1 appeared first on AutoIt Consulting.

]]>
GImageX v2.1.0 has been released.

GImageX is a graphical user interface for the ImageX tool from the Windows Assessment and Deployment Kit (Windows ADK). ImageX is used to capture and apply WIM images for Windows deployments. GImageX uses the supported Microsoft WIMGAPI API for working with WIM files.

Previous versions of GImageX were compiled for the Windows Automated Installation Kit (WAIK). The old version actually worked fine for Windows 8, but I've updated it to make sure it works correctly with the latest Windows ADK, which was released with Windows 8.1 (it also works with older operating systems such as Windows XP, Vista, 7 and Server 2008).

GImageX allows you to perform the most common WIM operations from an easy-to-use interface, including:

  • Image capture
  • Image deployment
  • Retrieve image information and XML information
  • Mount an image
  • Export images

To install GImageX you need a few files from the Windows ADK (wimgapi.dll, etc.). The Windows ADK is a large download by default, but you can get the required files by only choosing the Deployment Tools option during setup. Instructions for which files you need to extract are given in the GImageX help file.

Download GImageX here.

In summary the changes to GImageX are:

  • Compiled against the latest WIMGAPI libraries supplied in the Windows ADK for Windows 8.1.
  • Added missing SKU flags (Professional, ServerStandardCore, Embedded, etc.) – although these can actually be left blank so that GImageX determines them automatically (even the old version of GImageX did this correctly for new operating systems).

GImageX works on Windows XP and later and also works in WinPE.

 

The post GImageX v2.1.0 Released for Windows 8.1 appeared first on AutoIt Consulting.

]]>
Configuring an AirPort Extreme for NAT Only Mode https://www.autoitconsulting.com/site/networking/configuring-an-airport-extreme-for-nat-only-mode/ https://www.autoitconsulting.com/site/networking/configuring-an-airport-extreme-for-nat-only-mode/#comments Fri, 08 Feb 2013 15:28:45 +0000 http://www.autoitconsulting.com/site/?p=1025 Overview This post will show how you can use an configure an AirPort Extreme for NAT only mode so that you can allow an additional DHCP server on your network to handle IP address allocation. The instructions are for the AirPort Extreme in the Time Capsule, but I believe this should be the same for […]

The post Configuring an AirPort Extreme for NAT Only Mode appeared first on AutoIt Consulting.

]]>
Overview

This post will show how you can configure an AirPort Extreme for NAT-only mode so that an additional DHCP server on your network can handle IP address allocation. The instructions are for the AirPort Extreme built into the Time Capsule, but I believe they should be the same for a standalone AirPort Extreme as well.

I recently got an Apple Time Capsule to replace my old Linksys cable router. It's a great little unit, but one thing was causing me an issue and I couldn't initially get it to play nicely with my home network.

Due to the nature of my work I have quite a few machines on my home network. These are used for testing out various bits of Microsoft software such as System Center Configuration Manager and MDT. This involves a number of virtual and physical hosts and a full Microsoft Active Directory, DHCP and DNS setup. I also have a number of virtual client machines which are constantly rebuilt for testing. The way I have it set up, the Microsoft DHCP server is responsible for allocating IP addresses and directs clients to use the Microsoft DNS (along with Dynamic DNS registrations) and some other specific DHCP scope options. The Microsoft DNS is set up to forward external DNS requests to the router. This ensures that all the Microsoft clients can correctly register themselves in the Microsoft DNS but can still access the internet. All my other non-Microsoft devices (laptop, iPad, TV, etc.) work normally as well.

The Problem

For this setup to work, all I do is to turn off the DHCP server on the router so that the Microsoft DHCP server can take over. This is where the problems started because you don’t have that option in the interface for the Router Mode. You only get these options:

  • DHCP and NAT – This is the default mode and it runs a DHCP server and lets clients access the internet.
  • DHCP Only – This runs a DHCP server but doesn’t function as a router.
  • Off (Bridge Mode) – This is just used for acting as a wifi extender.

None of these modes work for me. What I actually need is an AirPort Extreme “NAT Only” router mode that doesn’t exist.

If you have more than one DHCP server on a LAN then both will try and hand out IP addresses to clients, but the client will register with the first server that responds. My solution was to configure the Time Capsule so that it was running in the DHCP and NAT mode so that it could be correctly used as an internet gateway, but I would configure it so that it had no free IP addresses to hand out. This would mean that any clients would only be able to successfully request an IP address from my Microsoft DHCP server.

The Solution

My solution is:

  • Set the Router Mode to “DHCP and NAT”.
  • Create the smallest possible DHCP range (2 IP addresses in the AirPort software).
  • Create “dummy” reservations for the DHCP range so that the addresses can’t actually be used.

Here is how I configured it. I’ll be using the IP range of 192.168.0.x

Open the AirPort utility and go to the Network tab. Set the Router Mode to “DHCP and NAT” as shown in the screenshot above.

Click the Network Options… button, set up DHCP for the 192.168.0.x network with a range from 253 to 254, then click Save.

This means that the AirPort Extreme will have the address 192.168.0.1 and will hand out the 192.168.0.253 and 192.168.0.254 addresses to clients. But we don't want it to hand out any addresses! We get around this by creating a couple of dummy reservations. From the Network tab, in the DHCP Reservations section, click the + symbol.

Now enter a new reservation with the name DummyReservation1 and a MAC address of 00:00:00:00:00:00 and click Save.

Add a second reservation with the name DummyReservation2 and a MAC address of 00:00:00:00:00:01 and click Save. (Note: the two reservations must have different MAC addresses or they will vanish when you save the configuration).

The DHCP Reservation list should now look like this:

Finally click Update to store and activate the new configuration. Remember that your other DHCP server is now in charge of handing out IP addresses in that range – in this case that is 192.168.0.2 to 192.168.0.252.

 

The post Configuring an AirPort Extreme for NAT Only Mode appeared first on AutoIt Consulting.

]]>
https://www.autoitconsulting.com/site/networking/configuring-an-airport-extreme-for-nat-only-mode/feed/ 3
Website Server Rebuild Diary https://www.autoitconsulting.com/site/hosting/website-server-rebuild-diary/ Mon, 19 Nov 2012 14:57:25 +0000 http://www.autoitconsulting.com/site/?p=954 Overview Not a usual scripting or deployment post this time – this is a website server rebuild diary. I wanted to make a quick post to document a server rebuild and to log the sort of work that goes into maintaining this and the www.autoitscript.com site. Even small (ish) sites can be quite a bit […]

The post Website Server Rebuild Diary appeared first on AutoIt Consulting.

]]>
Overview

Not a usual scripting or deployment post this time – this is a website server rebuild diary. I wanted to make a quick post to document a server rebuild and to log the sort of work that goes into maintaining this and the www.autoitscript.com site. Even small(ish) sites can take quite a bit of effort.

Current environment is CentOS 5.8 (upgraded many times from earlier versions of 5.x) running on a dedicated server (Quad core, 8GB RAM, mirrored disks).

CentOS packages aren’t upgraded throughout the lifetime of a major release (all 5.x versions) aside from security fixes. This means that the OS is very stable and great for hosting but it can require some custom packages of newer software to cater for the websites that I want to run. Significant packages include:

  • Apache 2.2.22 (Jason Litka Repo)
  • MySQL 5.1.48 (Jason Litka Repo)
  • PHP 5.2.17 (Jason Litka Repo)
  • Subversion 1.7 (self build)
  • Python 2.4

As you can see, the only “standard” CentOS package is Python 2.4 as many of the CentOS internals rely on it and upgrading it can cause major problems.

The main web applications we use are:

We currently handle around 40GB of traffic per day and the forum frequently has close to 1000 active users at once. I have recently started to use Amazon CloudFront to serve images so that the site is quite fast no matter where it is accessed from.

Tuesday 13th November, 2012

Attempted to upgrade MediaWiki to version 1.20 and found that it now has a minimum requirement of  PHP 5.3. I found that CentOS had unusually released a special set of PHP 5.3 RPMs so I installed those. PHP immediately stopped working and this was traced to my APC (PHP accelerator) extension which needs to be compiled against the correct version of PHP. I reinstalled APC using “pecl” and everything worked again. I was pretty pleased that PHP 5.3 was running as there are more and more applications (particularly WordPress extensions) which require 5.3 that I could now use.

I looked to upgrade Trac to v1.0 which has always been fairly straightforward, but found that it now had Python 2.5 as a minimum requirement. I could attempt to build a custom version of Python just for use in Trac, but I decided that as my hosting provider could supply a CentOS 6 image that I would use the opportunity to rebuild the server entirely (with so many upgrades and custom packages it would be nice to get back to a fresh install).

Unfortunately, the ISP imaging process will wipe the disk entirely, making the upgrade “fun”…

Wednesday 14th November, 2012

I spent a couple of hours double checking my backup processes. These are performed daily using a combination of:

  • Plesk Backup Manager – Backs up hosting settings, email, DNS and MySQL databases.
  • Custom shell scripts – Backs up anything not handled by Plesk – there’s a lot of tweaks to config files that happen over the years that I want to preserve. I also manually backup the MySQL databases in addition to Plesk as I’m a little paranoid…

The custom shell scripts create a rolling 30 day set of backups locally on the server. They also store the backups on a special FTP server provided by the ISP for off-server backups. Once a month I take a copy of  the latest backup to my home machine. This is not a quick operation as the backup files are 4 GB…zipped!

After checking the backup scripts I notice that the local FTP upload has not been working correctly due to an ISP infrastructure change. I also find that the newer version of Plesk that I’m using no longer backs up some of the customisations to web hosting config files that I’d made.

Time to triple check the scripts!

Thursday 15th November, 2012

During the rebuild the site is likely to be offline for a number of hours. At some point (after Plesk initially restores the hosting and database settings) the websites will likely be “up” but “broken” as I install the various software packages that the web applications depend on. I don’t want my Google rankings to be hit during this time and I want to minimise the problems that users get trying to use a half-installed site. This Google Webmaster post recommends using the 503 HTTP result code to indicate that the site is temporarily down so that search results don’t get mangled. I added the following code to my hosting configuration and will be enabling it just before I start the full site backup and restore.

ErrorDocument 503 "Our website is temporarily closed for maintenance. Please check back later."
RewriteEngine On
# TO ALLOW YOURSELF TO VISIT THE SITE, CHANGE 111 222 333 444 TO YOUR IP ADDRESS.
RewriteCond %{REMOTE_ADDR} !^111\.222\.333\.444$
RewriteRule .* - [R=503,L]

This will allow me to access the web sites normally with my IP address, but anyone else will get the 503 message.

Friday 16th November, 2012

I took the sites offline with the 503 message above and started the backup to the ISP local FTP site. I also took a copy to my home machine where I manually extracted the files to make sure that they looked OK. I started the ISP server image process for their CentOS 6.3 image. A nice feature of the 1&1 hosting package I use is that you can connect to your server via an SSH serial console. This means that even if the server is completely broken and not on the network you can still interact with it, and when doing a server rebuild it helps to reassure me that something is actually happening.

The imaging process took about an hour, and then it took another hour to download and extract the backup from FTP. Unlike my backups, I don't have scripts to automate the restore process – usually I'm rebuilding to a different platform so it would be tricky to automate properly. However, I have notes for every piece of software I need which document how I set it up previously. I also keep a master list of Yum packages in a script so that I can reinstall every Yum package quickly.

A summary of the restore procedure is:

  • Run “Yum Update” to ensure that all components are current
  • Reinstall Yum packages from my master list
  • Run some custom scripts that recreate the users and permissions I need
  • Extract backup files from FTP
  • Use the Plesk Backup Manager to restore the website files, basic hosting settings, DNS, email and MySQL databases

At this point the main website functionality was restored and the website and forums could be brought online again which I did by removing my 503 message. The site was usable again for the majority of users and I could restore the rest of the site services over the next few days (Subversion, Trac, etc).

 

The post Website Server Rebuild Diary appeared first on AutoIt Consulting.

]]>
Using Custom XML and Config Files with USMT and ConfigMgr https://www.autoitconsulting.com/site/deployment/using-custom-xml-and-config-files-with-usmt-and-configmgr/ https://www.autoitconsulting.com/site/deployment/using-custom-xml-and-config-files-with-usmt-and-configmgr/#comments Thu, 02 Feb 2012 18:00:45 +0000 http://www.autoitconsulting.com/site/?p=840 Overview Using USMT In System Center Configuration Manager (SCCM / ConfigMgr)  when performing an Operating System Deployment (OSD) can be a bit of a chore. The built in Task Sequence actions available for capturing and restoring user state only allow you to perform a very basic migration. Most of the time USMT requires quite a […]

The post Using Custom XML and Config Files with USMT and ConfigMgr appeared first on AutoIt Consulting.

]]>
Overview

Using USMT In System Center Configuration Manager (SCCM / ConfigMgr)  when performing an Operating System Deployment (OSD) can be a bit of a chore. The built in Task Sequence actions available for capturing and restoring user state only allow you to perform a very basic migration. Most of the time USMT requires quite a bit of  tweaking to get right and once you’ve got everything working in a standalone script it can seem like a step backwards when you see the default ConfigMgr capture and restore actions. When using USMT in anger it usually involves:

  • Creating a custom XML file for migrating customer-specific application settings
  • Creating a custom XML file for migrating user data as per the customer requirements
  • Creating a custom config.xml file to fine tune built-in migration settings

Unfortunately, the USMT options in the ConfigMgr interface are basic and you have to start setting special Task Sequence variables in order to get the 'clever' stuff to work. This post details how to achieve that.

These instructions are for ConfigMgr 2012 but they are pretty much identical for ConfigMgr 2007 as well. A state capture and restore is usually done as part of an Operating System Deployment Task Sequence. For the purposes of this walkthrough I'll be showing a cut-down Task Sequence that just captures user state using a ConfigMgr State Migration Point (SMP). Note: I'm also assuming that any custom XML files and a custom config.xml file are copied into the same folder as the USMT package (for example, USMT\amd64\config.xml and USMT\amd64\CustomData.xml). To see how to create your USMT package see this previous post.

Task Sequence

The Task Sequence actions we are interested in are:

  • Request State Store – This allows the client to access the ConfigMgr State Migration Point
  • Set OSDMigrateAdditionalCaptureOptions – This sets a special Task Sequence variable to allow the use of custom USMT command line options
  • Capture User State – This performs the state capture
  • Release State Store – This releases access to the ConfigMgr State Migration point

When creating an OSD Task Sequence with User State Migration the Task Sequence automatically includes Request State Store, Capture User State and Release State Store. The Set OSDMigrateAdditionalCaptureOptions is our custom action. The default actions can also be manually created from the User State Task Sequence menu.

OSDMigrateAdditionalCaptureOptions

Let’s look at the Set OSDMigrateAdditionalCaptureOptions step first. This is just a standard action that sets a Task Sequence variable.

The OSDMigrateAdditionalCaptureOptions variable allows us to add our own parameters to the scanstate.exe command line when executed. In this example we set the value to the following:

/uel:30 /ue:* /config:"%_SMSTSMDataPath%\Packages\%_OSDMigrateUsmtPackageID%\%PROCESSOR_ARCHITECTURE%\Config.xml"

Those options in more detail:

  • /uel:30 /ue:* – Standard scanstate.exe options that mean we exclude local accounts, and we exclude accounts not used in the last 30 days
  • /config:”…” – This allows us to specify our custom config.xml file. Unfortunately we must use a full path to the USMT package and config.xml file. The various variables allow us to specify the location of the USMT package after it is downloaded to the client so that we can provide a full path to the config.xml file.
  • %_SMSTSMDataPath% – Resolves to the root location of the cached package on the client, e.g. C:\_SMSTaskSequence
  • %_OSDMigrateUsmtPackageID% – Resolves to the package ID, e.g. AUT00002
  • %PROCESSOR_ARCHITECTURE% – Resolves to the build architecture, e.g. amd64 or x86
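
Putting those substitutions together, the /config argument expands as in this small Python sketch (the package ID and data path are the example values from the list above, which vary per site and per deployment):

```python
# Expand the Task Sequence variables in the /config argument using the
# example values from the list above (illustrative, per-site values).
template = (r"%_SMSTSMDataPath%\Packages\%_OSDMigrateUsmtPackageID%"
            r"\%PROCESSOR_ARCHITECTURE%\Config.xml")

variables = {
    "_SMSTSMDataPath": r"C:\_SMSTaskSequence",       # cached package root
    "_OSDMigrateUsmtPackageID": "AUT00002",          # USMT package ID
    "PROCESSOR_ARCHITECTURE": "amd64",               # build architecture
}

path = template
for name, value in variables.items():
    path = path.replace("%" + name + "%", value)

print(path)  # C:\_SMSTaskSequence\Packages\AUT00002\amd64\Config.xml
```

This resolved path is what scanstate.exe actually receives at run time, which you can confirm in the command-line summary at the top of scanstate.log.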

Capture User State

The next step is to use the Capture User State task to specify which migration XML files we want to use. We use the Customize how user profiles are captured option and add the filenames of the migration XML files. In this example I want to use a couple of default USMT files (MigApp.xml and MigUser.xml) along with my own custom XML file (CustomData.xml). These files must be stored in the USMT\amd64 or USMT\x86 folder of the USMT package as appropriate.

And that’s it. This example USMT capture will now use our custom command line switches, custom config.xml, and custom migration files along with the ConfigMgr State Migration Point. I’d also recommend using the verbose logging option in the Capture User State action. This means that a log is produced on the client at C:\Windows\CCM\Logs\scanstate.log. This is very handy when trying to get this procedure to work as one of the first log entries is a summary of the command line options that were used.

The procedure for creating custom Restore User State options is similar except the custom Task Sequence variable name is OSDMigrateAdditionalRestoreOptions and the variable used in this would be %_OSDMigrateUsmtRestorePackageID%.

 

The post Using Custom XML and Config Files with USMT and ConfigMgr appeared first on AutoIt Consulting.

]]>
https://www.autoitconsulting.com/site/deployment/using-custom-xml-and-config-files-with-usmt-and-configmgr/feed/ 3
Create a USMT Package in ConfigMgr https://www.autoitconsulting.com/site/deployment/create-a-usmt-package-in-configmgr/ Wed, 01 Feb 2012 11:26:30 +0000 http://www.autoitconsulting.com/site/?p=818 Overview In System Center Configuration Manager (SCCM / ConfigMgr)  when performing an Operating System Deployment (OSD) there is usually some form of user data and settings migration. This is often performed using the Microsoft User State Migration Toolkit (USMT). The first step in using USMT is to create the USMT tool ConfigMgr package. This post […]

The post Create a USMT Package in ConfigMgr appeared first on AutoIt Consulting.

]]>
Overview

In System Center Configuration Manager (SCCM / ConfigMgr)  when performing an Operating System Deployment (OSD) there is usually some form of user data and settings migration. This is often performed using the Microsoft User State Migration Toolkit (USMT). The first step in using USMT is to create the USMT tool ConfigMgr package. This post details how to create such a package for USMT 4.

These instructions are for ConfigMgr 2012 but they are pretty much identical for ConfigMgr 2007 as well.

Create USMT Package

First, get hold of the USMT 4 files. These can be found on the ConfigMgr server in the folder C:\Program Files\Windows AIK\Tools\USMT.

Copy these to your ConfigMgr software library share. In this example we will copy them to \\SERVER\ConfigMgr_SWStore$\OSD\USMT.

Note: In order to properly support migrations to/from Office 2010 you must also download the updated USMT components from this knowledge base article: http://support.microsoft.com/kb/2023591.

Now create a standard ConfigMgr package:

On the program type screen select Do not create a program – the OSD task sequences will handle this for us.

The USMT package is now ready for use in an OSD Task Sequence using the Capture User State and Restore User State actions.

 

The post Create a USMT Package in ConfigMgr appeared first on AutoIt Consulting.

]]>
Create a Windows 7 BitLocker Partition in ConfigMgr https://www.autoitconsulting.com/site/deployment/create-a-windows-7-bitlocker-partition-in-configmgr/ Mon, 30 Jan 2012 21:47:25 +0000 http://www.autoitconsulting.com/site/?p=784 Overview In System Center Configuration Manager (SCCM / ConfigMgr)  something I’ve done a few times is to create a BitLocker partition for Windows 7 during an Operating System Deployment (OSD) Task Sequence. I’ve seen the method used here a few times before but I wanted to document it for myself so that I can use […]

The post Create a Windows 7 BitLocker Partition in ConfigMgr appeared first on AutoIt Consulting.

]]>
Overview

In System Center Configuration Manager (SCCM / ConfigMgr), something I’ve done a few times is to create a BitLocker partition for Windows 7 during an Operating System Deployment (OSD) Task Sequence. I’ve seen this method used a few times before, but I wanted to document it for myself so that I can use it for an upcoming article on configuring BitLocker with TPM+PIN from ConfigMgr.

Instructions

These instructions are the same for both ConfigMgr 2007 and 2012.

Edit your deployment Task Sequence and select the Partition Disk 0 node. By default there is a single partition; we will create an additional partition so that we end up with two partitions named:

  • System Reserved
  • OS

 

Call the first partition System Reserved and configure it with the following options:

  • Partition type: Primary, Use a specific size (300 MB)
  • Make this a boot partition: Checked
  • File system: NTFS, Quick format

 

For Windows 7 you only need a 100 MB partition for BitLocker, but I prefer to use 300 MB to leave room in case you want to use the Windows Recovery Environment.

Call the second partition OS and configure it with the following options:

  • Partition type: Primary, Use a percentage of remaining free space (100%)
  • File system: NTFS, Quick format
  • Variable: OSPART

 

The variable OSPART can now be used to correctly identify the partition to be used in the Apply Operating System step of the Task Sequence. Configure as follows:

  • Destination: Logical drive letter stored in a variable
  • Variable name: OSPART

 

Now when this Task Sequence runs, the disk will be partitioned correctly for future use of BitLocker. Alternatively, you can just use a single partition and run the BitLocker preparation utility to shrink and partition the drive. This is sometimes useful when using USMT and hard linking to preserve disk contents. Personally, I’m a little paranoid and generally prefer to use USMT to back up to the network and then properly clean and partition the disk as above.
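For reference, the partition scheme produced by the Task Sequence steps above corresponds roughly to the following diskpart script. This is a sketch only – in OSD the Partition Disk step does this for you, and running such a script manually will destroy the contents of the disk:

```text
select disk 0
clean
create partition primary size=300
format quick fs=ntfs label="System Reserved"
active
create partition primary
format quick fs=ntfs label="OS"
assign
```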

 


]]>
ConfigMgr 2012 SQL Database Collation https://www.autoitconsulting.com/site/deployment/configmgr-2012-sql-database-collation/ Thu, 26 Jan 2012 11:39:44 +0000 http://www.autoitconsulting.com/site/?p=741 Overview I just tried to update my ‘default’ installation of System Center Configuration Manager (SCCM / ConfigMgr) 2012 RC1 to RC2. I got a failure at the pre-req check stage stating that my SQL 2008 R2 database was using the wrong collation type and that installation couldn’t continue. It turns out that the default collation […]

The post ConfigMgr 2012 SQL Database Collation appeared first on AutoIt Consulting.

]]>
Overview

I just tried to update my ‘default’ installation of System Center Configuration Manager (SCCM / ConfigMgr) 2012 RC1 to RC2. I got a failure at the pre-req check stage stating that my SQL 2008 R2 database was using the wrong collation type and that installation couldn’t continue. It turns out that the default collation that I had used when I installed SQL 2008 R2 (a next…next…next…finish install) was no good. I’m not sure why ConfigMgr 2012 RC1 was happy with that configuration but RC2 certainly wasn’t!

This article states the supported database configuration for ConfigMgr 2012 – it doesn’t appear to mention collation type but apparently this will be updated in the future: http://technet.microsoft.com/en-us/library/gg682077.aspx#BKMK_SupConfigSQLDBconfig

The supported collation type is: SQL_Latin1_General_CP1_CI_AS
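You can check the collation of an existing SQL Server instance before running the ConfigMgr pre-req checker. For example:

```sql
-- Instance-level collation
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

-- Per-database collation
SELECT name, collation_name FROM sys.databases;
```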

Example

For reference, here are some screenshots on how to set this collation type during the installation of SQL Server 2008 R2.

 Collation 1

Collation 2

Collation 3

 

 


]]>
MDT and ConfigMgr Build Drivers for Windows 7 https://www.autoitconsulting.com/site/deployment/mdt-and-configmgr-build-drivers-for-windows-7/ https://www.autoitconsulting.com/site/deployment/mdt-and-configmgr-build-drivers-for-windows-7/#comments Sun, 03 Jul 2011 09:27:25 +0000 http://www.autoitconsulting.com/site/?p=685 Overview When doing Windows 7 builds in either MDT 2010 or ConfigMgr 2007 you have to download and install a selection of drivers that are to be installed in your images. Some drivers are used for the WinPE boot image so that the boot image can access the local harddisk (a boot critical driver) and […]

The post MDT and ConfigMgr Build Drivers for Windows 7 appeared first on AutoIt Consulting.

]]>
Overview

When doing Windows 7 builds in either MDT 2010 or ConfigMgr 2007 you have to download a selection of drivers to be installed in your images. Some drivers are used in the WinPE boot image so that it can access the local hard disk (boot critical drivers) and connect to the network so that the machine can continue the MDT/ConfigMgr build process. I tend to use the latest versions of certain boot critical drivers direct from the main vendor (i.e. Intel and Broadcom) for my boot images and reference machines. Then, at deployment time, I’ll inject the vendor-specific drivers on a model-by-model basis. This also helps keep the reference machine ‘clean’ and free from all those bits of add-on software and control panel applets that some vendor drivers inflict on us. It also helps keep the number of drivers injected into the boot image to a minimum; all too often I’ve seen hundreds of vendor versions of the exact same driver injected into a boot image, and some of them can be really buggy or bloat the image for no reason.

As these drivers can be a little difficult to find, this post details the download locations of MDT and ConfigMgr build drivers for Windows 7 that will cater for 99% of the machines that you will ever likely encounter. I’ll try to keep it updated whenever the download locations change (I’ll be using this page as a reference point for myself too!).

Intel® Rapid Storage Technology – Boot Critical Driver

This driver will handle Intel RAID and SATA for the vast majority of Intel desktop and laptop machines. Without this driver Windows will be unable to access the local hard disk.

Download Intel® Rapid Storage Technology Driver (select your required OS version in the dropdown list)

To extract the driver files run:

iata_enu.exe -e -p C:\Intel_RST_Driver

Intel® PROSet Wired Network Drivers – Network Driver

These PROSet drivers will handle Intel wired Ethernet on the majority of desktop and laptop machines.

Download Intel® PROSet Wired Network Drivers (select your required OS version in the dropdown list)

To extract the driver files run:

PROWin.exe /f C:\Intel_PROSet_Driver /s /e

Broadcom NetXtreme Desktop Drivers – Network Driver

If a machine doesn’t use the Intel PROSet drivers then it’s almost certainly going to be the Broadcom NetXtreme variety.

Download Broadcom NetXtreme Network Drivers

Intel® 5, 4, 3, 900 Series Chipsets – System Drivers

These Intel chipset drivers aren’t required for boot or reference images but they handle a lot of the ‘unknown’ devices on Intel-chipset machines. I tend to include these as a generic set of drivers for most machines rather than using the vendor versions that are often outdated.

Download the Intel® 5, 4, 3, 900 Series Chipsets Drivers (select your required OS version in the dropdown list and download the .ZIP version)

Summary

Armed with the latest versions of these drivers, you should be able to get your MDT or ConfigMgr build process working on most of the machines in common use, saving you hours of hunting through the baffling array of drivers on Intel’s website!

 

 


]]>
Setting a UK Keyboard in ConfigMgr 2007 OSD https://www.autoitconsulting.com/site/deployment/setting-a-uk-keyboard-in-configmgr-2007-osd/ Tue, 28 Jun 2011 20:46:54 +0000 http://www.autoitconsulting.com/site/?p=659 Overview In the UK, we actually get the US version of Windows 7 and Office 2010, so whenever I do any build engineering in either MDT or ConfigMgr one of the first things I need to customize are the regional settings. In particular the keyboard layout and location. There are many ways to achieve this […]

The post Setting a UK Keyboard in ConfigMgr 2007 OSD appeared first on AutoIt Consulting.

]]>
Overview

In the UK, we actually get the US version of Windows 7 and Office 2010, so whenever I do any build engineering in either MDT or ConfigMgr one of the first things I need to customize are the regional settings. In particular the keyboard layout and location. There are many ways to achieve this ranging from manually making the change in a reference image or creating an unattend.xml file. In this post I will show how you can set regional settings for Windows 7 at deploy-time using ConfigMgr 2007 OSD and collection variables. This will allow you to automatically have different settings depending on which ConfigMgr collection a machine belongs to. The process shown in this post is for UK settings, but the method will work equally well for any other regional requirements.

Collection Settings

First, on a collection used for OSD deployment, select Modify Collection Settings and go to the Collection Variables tab. Enter the following variables:

  • OSDSystemLocale: en-GB
  • OSDInputLocale: en-GB
  • OSDUserLocale: en-GB
  • OSDUILanguage: en-US
  • OSDUILanguageFallback: en-US

Note: A common error is trying to use a value of en-GB for the UILanguage – there is no such UI setting; Brits have to make do with American spellings in our Microsoft products… The collection variables should now look like this:

Unattend File

Now create a simple unattend.xml file using the Windows Automated Installation Kit and the Windows System Image Manager tool. You need to:

  • Add the component amd64_Microsoft-Windows-International-Core_neutral to the specialize section (use the x86 component if you are using a 32-bit version of Windows 7).
  • Enter the ‘OSD…’ variables we created above in the relevant sections wrapping them with % signs, for example %OSDSystemLocale% for the SystemLocale entry.

When finished it should look something like this:
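As a sketch, the relevant fragment of the resulting unattend.xml (assuming 64-bit Windows 7; the attribute values shown are the standard ones generated by Windows System Image Manager) looks something like this:

```xml
<settings pass="specialize">
  <component name="Microsoft-Windows-International-Core"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <InputLocale>%OSDInputLocale%</InputLocale>
    <SystemLocale>%OSDSystemLocale%</SystemLocale>
    <UILanguage>%OSDUILanguage%</UILanguage>
    <UILanguageFallback>%OSDUILanguageFallback%</UILanguageFallback>
    <UserLocale>%OSDUserLocale%</UserLocale>
  </component>
</settings>
```

At deploy time ConfigMgr expands the %OSD…% placeholders with the collection variable values before Windows Setup reads the file.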

Save the unattend file in a suitable folder (I’m saving it in \\server\ConfigMgr_SWStore$\OSD\Unattend\Win7SP1x64_Unattend.xml – using the folder structure suggested in this post). I tend to put various unattend.xml files for different uses in the same folder then I create a single package for them all for simplified use – in this example I’ll call it OSD Unattend Files.

Task Sequence

The final step is to attach this unattend.xml file to your OSD Task Sequence. Select the Apply Operating System task and enter the package and unattend file name in the Use an unattended or sysprep answer file… boxes:

Now when you use this task sequence the collection variables will be expanded inside the unattend file and voila – per collection regional settings! You can also set these same variables directly on computer objects if you need to specialise for an individual computer.

 


]]>
ConfigMgr 2007 OSD Folder Structure https://www.autoitconsulting.com/site/deployment/configmgr-2007-osd-folder-structure/ Sun, 26 Jun 2011 10:38:55 +0000 http://www.autoitconsulting.com/site/?p=624 Overview When configuring ConfigMgr 2007 Operating System Deployment (OSD) there are an endless number of applications, operating system images, operating system install files, drivers, source files, package files, temp files. Everytime I set it up I end up with a different structure so this blog post presents an example ConfigMgr 2007 OSD folder structure that […]

The post ConfigMgr 2007 OSD Folder Structure appeared first on AutoIt Consulting.

]]>
Overview

When configuring ConfigMgr 2007 Operating System Deployment (OSD) there is an endless number of applications, operating system images, operating system install files, drivers, source files, package files, and temp files to organise. Every time I set it up I end up with a different structure, so this blog post presents an example ConfigMgr 2007 OSD folder structure that can be used for general application and operating system deployment files. This post will be updated as I get better ideas and suggestions on how to organise things – but the main issue is that there is no ‘right’ way of organising files that everyone can agree on, so this is as good a place as any!

File Share

First I create a single share with the following properties:

  • Share Name: ConfigMgr_SWStore$ – I use $ just to make the share hidden from casual browsing.
  • Share Security: Everyone: Full Control.
  • NTFS Security: SYSTEM: Full Control; Administrators: Full Control; ConfigMgr Site Servers: Modify; ConfigMgr Network Access Account: Modify. I usually create a group containing the computer accounts of the ConfigMgr site servers, which makes applying permissions easier. Also, the Network Access Account (configured in the Computer Client Agent properties) will require access during OSD. Depending on your security policies you may also wish to remove users from accessing this share.

Folder Structure

Then create a folder structure as shown in this image:

The top level folders are split into Apps, OSD and Updates.

Apps Folder

  • Apps \ App-V – The main folder for App-V related files.
  • Apps \ App-V \ Source – App-V source packages used when first importing the application into ConfigMgr.
  • Apps \ App-V \ Packages – The resulting App-V package Data Source.
  • Apps \ Native – Native packages (Data Source for MSIs, .exes, batch files, etc.).
  • Apps \ Native \ Microsoft \ Office 2010 Pro
  • Apps \ Native \ Adobe \ Reader X

OSD Folder

  • OSD \ Install – Operating System Install Packages (full set of source files with setup.exe, used for creating the reference image).
  • OSD \ Install \ Windows 7 Enterprise x64 SP1
  • OSD \ Install \ Windows 7 Enterprise x86 SP1
  • OSD \ Image – Operating System Image (.wim) files – the reference images – used for deployment.
  • OSD \ Unattend – Unattend.xml files and packages used for customizing deployment.
  • OSD \ Capture – Temporary location used for storing the reference image during capture. After capture, move the resulting image file into the Image folder.
  • OSD \ Drivers – Device drivers.
  • OSD \ Drivers \ Source – Source files for device drivers used when importing drivers into the ConfigMgr interface.
  • OSD \ Drivers \ Packages – Resulting driver packages (Data Source).
  • OSD \ Tools – Source files for tool packages such as MDT or USMT.
  • OSD \ Tools \ USMT – Source files for the User State Migration Toolkit (USMT).
  • OSD \ Tools \ MDT_Toolkit – Source files for the Microsoft Deployment Toolkit (MDT).
  • OSD \ Tools \ MDT_CustomSettings – MDT CustomSettings.ini files.

Updates Folder

  • Updates – Folder for storing Software Updates packages.
  • Updates \ All Updates 2010 – Example package.
  • Updates \ All Updates 2011 – Example package.

Here is a zip file of the entire folder structure you can use as a template: ConfigMgr_SWStore.zip
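If you prefer scripting the layout rather than downloading the template, a short Python sketch can create the same structure (the base path here is a local example – in production you would point it at your share, e.g. \\SERVER\ConfigMgr_SWStore$):

```python
import os

# Example base path; replace with your ConfigMgr software store share
base = "ConfigMgr_SWStore"

# Leaf folders from the tables above; parents are created implicitly
folders = [
    "Apps/App-V/Source",
    "Apps/App-V/Packages",
    "Apps/Native/Microsoft/Office 2010 Pro",
    "Apps/Native/Adobe/Reader X",
    "OSD/Install/Windows 7 Enterprise x64 SP1",
    "OSD/Install/Windows 7 Enterprise x86 SP1",
    "OSD/Image",
    "OSD/Unattend",
    "OSD/Capture",
    "OSD/Drivers/Source",
    "OSD/Drivers/Packages",
    "OSD/Tools/USMT",
    "OSD/Tools/MDT_Toolkit",
    "OSD/Tools/MDT_CustomSettings",
    "Updates/All Updates 2010",
    "Updates/All Updates 2011",
]

for folder in folders:
    # os.path.join handles path separators on both Windows and POSIX
    os.makedirs(os.path.join(base, folder), exist_ok=True)
```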

 


]]>
Get Selected Microsoft TechNet Library Content Offline https://www.autoitconsulting.com/site/deployment/get-selected-microsoft-technet-library-content-offline/ Wed, 15 Jun 2011 12:42:31 +0000 http://www.autoitconsulting.com/site/?p=574 Overview No-one can ever know everything they need to know about Microsoft products and techies will freely admit that they spend half their time checking resources such as Microsoft TechNet while working on a problem or design. Sometimes I need to go on to a site where I won’t have internet access. As anyone who […]

The post Get Selected Microsoft TechNet Library Content Offline appeared first on AutoIt Consulting.

]]>
Overview

No-one can ever know everything they need to know about Microsoft products and techies will freely admit that they spend half their time checking resources such as Microsoft TechNet while working on a problem or design. Sometimes I need to go on to a site where I won’t have internet access. As anyone who relies on access to online resources will know, this can really slow things down and small questions (such as “is it port 60 or 67?”) can be frustrating.

There is a free tool called Package This that can turn selections from the online TechNet library into a compiled help file (.chm) that you can then read and search offline! Yes, it seems to have a few bugs but if you keep the selections small it seems to work very well indeed.

As an example I will demonstrate downloading the Operating System Deployment section of the Config Mgr 2007 documentation. Here are the steps to create a .chm from selected content:

  1. Download and install HTML Help Workshop. (If running Windows Vista or later you will get a message after installation saying that you already have a later version installed – that’s fine, but Package This doesn’t seem to be able to use it until you’ve attempted to install this older version.)
  2. Download the latest version of PackageThis.exe.
  3. Run PackageThis.exe.
  4. Select the TechNet Library from the Library menu.
  5. Find the content you want – you can either choose single pages or get a whole section by using Select This Node and All Children. Note: it’s best to keep the selections small as the program has a few bugs and will sometimes not work correctly. 
  6. The selected articles will be downloaded – this can take some time depending on the number selected.
  7. Once all the content has been downloaded, select Export to Chm File… from the File menu.
  8. Enter the filename and title for the .chm file and confirm.

 

If all goes well you should now have a .chm file with the selected content that you can access when away from the Internet! As a bonus it’s even searchable.

 


]]>
App-V Recipe: AutoIt v3 https://www.autoitconsulting.com/site/deployment/app-v-recipe-autoit-v3/ Fri, 20 May 2011 11:39:30 +0000 http://www.autoitconsulting.com/site/?p=538 Overview This blog post will show you how to sequence AutoIt using App-V 4.6 SP1 (the latest version at the time of writing). Now, AutoIt is probably the easiest application to sequence in the world and doesn’t really need a blog post describing how to do it. However, I intend to write some more posts […]

The post App-V Recipe: AutoIt v3 appeared first on AutoIt Consulting.

]]>
Overview

This blog post will show you how to sequence AutoIt using App-V 4.6 SP1 (the latest version at the time of writing). Now, AutoIt is probably the easiest application in the world to sequence and doesn’t really need a blog post describing how to do it. However, I intend to write some more posts about using App-V in System Center Configuration Manager (ConfigMgr) 2007 for deployment and upgrades, so an easy-to-follow recipe for a free application should be useful.

For sequencing I’ll be using Windows XP SP3 and creating a package that can be used on all later operating systems. I’ve set up the Windows XP machine as per the recommendations in App-V 4.6 SP1 Sequencing Guide – essentially all AV has been turned off, Windows update is off, and the App-V 4.6 SP1 sequencer has been installed. In my case the App-V sequencer machine is a Hyper-V virtual machine so that I can use snapshots to quickly get it back to a clean state.

Recipe

  1. Download the AutoIt full installer from this page.
  2. Run the Microsoft Application Virtualization Sequencer.
  3. Select Create a New Virtual Application Package.
  4. Select Create Package (default).
  5. On the Prepare Computer screen check that there are no warnings (Windows Defender is running, etc.) and then click Next.
  6. Select Standard Application (default) and click Next.
  7. At the Select Installer screen browse to the AutoIt installer (it should be something like ‘autoit-v3.3.14.2-setup.exe’) and then click Next.
  8. In the Package Name screen enter the name AutoIt v3 as the package name. Note that this automatically generates the ‘Primary Virtual Application Directory’ of Q:\AutoIt v3. (in App-V 4.6 you no longer have to use 8.3 filenames so this automatically generated name is OK). Click Next.
  9. The AutoIt installer will start. Use the defaults for all installation questions except for the installation folder which must be changed to match the Primary Virtual Application Directory of Q:\AutoIt v3.
  10. At the end of installation deselect Show release notes and then click Finish.
  11. Back in the App-V sequencer, select I am finished installing and then click Next.
  12. Select the following tasks to run: AutoIt Window Info, Compile Script to .exe, Run Script, SciTE Script Editor. Then click Run Selected.
  13. Close down all the launched applications then back in the App-V Sequencer click Next.
  14. Review the Installation Report and then click Next.
  15. We need to customize the package a little so select Customize and then click Next.
  16. Remove shortcuts as required for your corporate environment and then click Next. I would recommend removing ‘AutoIt v3 Website’ and ‘Check for Updates’.
  17. At the Prepare for Streaming page run the same applications as shown in step 12, close them after launch, and then click Next.
  18. Select Allow this package to run on any operating system and then click Next. (In theory, you could also create a specific 64-bit package as the AutoIt installer only installs 64-bit components when installed on a 64-bit machine, but the 32-bit version is fine for 99% of cases).
  19. Select Save the package now and Compress Package and click Create. Optionally enter the version name in the comments field.
  20. Click Close and you’re done!

 


]]>
Windows 7 Aero Theme Not Enabled After Deployment https://www.autoitconsulting.com/site/deployment/windows-7-aero-theme-not-enabled-after-deployment/ https://www.autoitconsulting.com/site/deployment/windows-7-aero-theme-not-enabled-after-deployment/#comments Fri, 13 May 2011 15:02:52 +0000 http://www.autoitconsulting.com/site/?p=488 Overview An issue I saw a number of times at customers was that the Windows 7 Aero theme was not enabled after deployment. I saw this numerous times in early workshops and pilots for customers and completely forgot about it until I saw the question asked again a few weeks ago. So here is a […]

The post Windows 7 Aero Theme Not Enabled After Deployment appeared first on AutoIt Consulting.

]]>
Overview

An issue I saw a number of times at customers was that the Windows 7 Aero theme was not enabled after deployment. I saw this numerous times in early workshops and pilots and completely forgot about it until I saw the question asked again a few weeks ago. So here is a post describing the symptoms you might see and how to work around it.

Aero Troubleshooting

There are two main tools that people turn to when troubleshooting Aero:

  • Running the ‘Find and fix problems with transparency and other visual effects‘ troubleshooter that attempts to automatically find and fix issues.
  • Re-running the Desktop Window Manager (DWM) WinSAT test (either through the standard interface or by running ‘winsat dwm’ from the command line).

In the case of this particular issue, neither of these methods will solve the problem; they simply report ‘DWM not running‘ or ‘Desktop Window Manager is disabled‘, and the system will still not enable Aero. The strange thing is that you can get Aero back by doing the following:

  • Right-click the desktop, select Personalize.
  • Re-select the standard Windows 7 theme (or any other Aero theme).
  • Aero magically starts working again.

Obviously, manually selecting the Windows 7 theme on each machine isn’t going to work when you are deploying more than a couple of machines, but there are two simple solutions.

Solution 1

The easiest solution for an enterprise customer is probably going to be by using Group Policy:

  1. Find the path of a theme we want to use – aero.theme is the standard Windows 7 theme and can be found in C:\Windows\Resources\Themes\aero.theme
  2. Open the Group Policy console (gpmc or local group policy as required).
  3. Find the policy User Configuration \ Administrative Templates \ Personalization \ Load a specific theme
  4. Enter the path of the theme, we will use an environment variable so that it works on all machines: %windir%\Resources\Themes\aero.theme
  5. Save the policy and apply it to the relevant Organizational Units (OUs).

Note: This only changes the theme for a user’s first-time logon which makes it quite a neat solution.

Solution 2

The second solution can be used when deploying Windows 7 using an unattend.xml answer file. To make the required changes:

  1. Open the unattend.xml answer file in the Windows System Image Manager which is installed with the Windows Automated Installation Kit (WAIK).
  2. Add the component amd64_Microsoft-Windows-Shell-Setup_neutral to the Specialize pass (use the x86 version if not using Windows 7 64-bit).
  3. Find the Themes \ ThemeName entry.
  4. Add the Aero theme name you wish to use – use just the main part of the filename (see the Windows Theme folder screenshot above). For the standard Windows 7 theme the value is ‘aero‘.
  5. Save the unattend.xml file and deploy!
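As a sketch, the resulting fragment of the unattend.xml (64-bit component shown; the attribute values are the standard ones generated by Windows System Image Manager) would look something like this:

```xml
<settings pass="specialize">
  <component name="Microsoft-Windows-Shell-Setup"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <Themes>
      <ThemeName>aero</ThemeName>
    </Themes>
  </component>
</settings>
```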

 


]]>
Windows 7 Self Optimizing Boot Process https://www.autoitconsulting.com/site/performance/windows-7-self-optimizing-boot-process/ https://www.autoitconsulting.com/site/performance/windows-7-self-optimizing-boot-process/#comments Fri, 06 May 2011 09:50:34 +0000 http://www.autoitconsulting.com/site/?p=422 Overview You may have heard the about the fact that Windows 7 (and actually, Windows Vista too) has improved boot up times over Windows XP and dismissed it as “that there marketing speak”. Surprisingly, it’s actually true! On top of that, the boot process is also optimized over time to make things even faster. This […]

The post Windows 7 Self Optimizing Boot Process appeared first on AutoIt Consulting.

]]>
Overview

You may have heard that Windows 7 (and actually, Windows Vista too) has improved boot-up times over Windows XP and dismissed it as “that there marketing speak”. Surprisingly, it’s actually true! On top of that, the boot process is also optimized over time to make things even faster. This blog post gives a high-level overview of how this works and also provides some actual measurements.

The descriptions given in this article are fairly high-level for a number of reasons. I wanted it to be readable for the general IT admin, and hard documentation on the exact workings of this stuff is virtually non-existent so it contains quite a lot of educated guesses. The epic Windows Internals book discusses both the logical prefetching and ReadyBoot mechanics in a lot of detail, but many of the services and registry keys mentioned in that book reference Windows Vista and they no longer apply to Windows 7 (although the functionality is still there and has been improved, it’s just less obvious which services and keys now control the process).

Logical Prefetcher

Various analyses of boot traces have shown that one of the main factors slowing the boot process is disk seek time. As the various boot files, DLLs, and drivers are loaded, there are lots of page faults and disk seek requests as different parts of files and directories are accessed. Windows 7 keeps track of which files were accessed and their location on the disk; this tracing continues for up to 120 seconds after boot, or 30 seconds after the user’s shell (explorer.exe) starts, whichever comes first. These traces are stored in C:\Windows\PreFetch:

Logical Prefetcher Trace Folder

Each trace contains a list of the files and directories accessed when a given application starts (or during boot) and this information is used to give the prefetcher a chance to prefetch all the data required in one go, so that the loading of the application can be optimized.

In addition, any files referenced by these boot applications (DLLs, SYS files, etc) are also tracked in C:\Windows\PreFetch\Layout.ini. Every few days, when the system is idle, defrag.exe is called with a command-line parameter that causes all these referenced files to be moved to a contiguous area of the disk. This means that the prefetching of these files is much more efficient and further improves the boot time. You can manually invoke defrag to optimize boot files by running the following command (Windows 7 only, the parameters are different on Windows Vista):

defrag.exe c: /b

(This will only work after the machine has been booted around six times; otherwise you will get an error about the lack of boot optimization files.)

Note: When fully optimized this defrag command should complete quickly (a couple of minutes or so) as the boot files will already be in a contiguous area of the disk. However, I’ve seen machines many months old that have taken up to an hour for this defrag command to complete which leads me to believe that the automatic idle processing may not actually work correctly in all situations. Therefore, it’s a good idea to run the command manually.

You can see the last time the automatic boot defrag occurred by checking the value of the registry key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Prefetcher\LastDiskLayoutTimeString. Unfortunately, this value doesn’t appear to be changed when you run the defrag command manually.

ReadyBoot

According to the Windows Internals book, the logical prefetching described above is used when the system has less than 512MB of memory. If the system has 700MB or more then an in-RAM cache is used to further optimize the boot process (it’s not clear from the book whether this ReadyBoot cache completely replaces the logical prefetching approach or just builds on it; my assumption is that both work together). After each boot, the system generates a boot caching plan for the next boot using file trace information from up to the five previous boots, which contains details of which files were accessed and where on the disk they were located. These traces are stored as .fx files in the C:\Windows\PreFetch\ReadyBoot folder.

ReadyBoot Trace Folder

 

Services

Under Windows 7, the service that handles ReadyBoot optimization is part of the Superfetch service.

Under Windows Vista, ReadyBoot optimization was part of the ReadyBoost service (ironically, when Windows Vista came out, the advice on “tweaking” sites was to disable ReadyBoost if you weren’t going to use USB/ReadyBoost to improve performance – errrm, no…).

There are some semi-undocumented registry keys that control the prefetch and Superfetch operations, but in all but the most exceptional cases they should not be touched. They are only documented on the Windows 7 Embedded sites for Disabling Superfetch and Disabling Prefetch. The default value for both of these keys is 3, which enables both boot and application prefetching.

An Example

So, does this prefetching and boot optimizing actually work? The following table shows the boot times on an ancient Toshiba Tecra M5 (circa 2004) through its first boot, the subsequent five boots, and finally after the disk is defragged with the boot optimization switch. The machine is using a vanilla version of Windows 7 Ultimate SP1 x86 – no applications are installed. The boot time is measured using the bootDoneViaPostBoot metric generated by the Windows Performance Toolkit (WPT). (The boot logging process is detailed in this previous blog post.)

Boot Cycle Boot Time (bootDoneViaPostBoot) Boot Time Improvement
1 (first boot after installation) 85 seconds N/A
2 (first boot training) 73 seconds 14%  (+14%)
3 31 seconds 63%  (+49%)
4 31 seconds 63%  (+0%)
5 28 seconds 67%  (+4%)
6 (last boot training) 26 seconds 69%  (+2%)
7 (after boot optimizing defrag) 24 seconds 72%  (+3%)
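The improvement percentages in the table can be reproduced with a quick calculation. A minimal sketch in Python, using the boot times from the table above (the published figures are rounded to whole percentages, so they differ slightly from the one-decimal values here):

```python
# Boot times in seconds for each cycle, taken from the table above.
boot_times = [85, 73, 31, 31, 28, 26, 24]
baseline = boot_times[0]  # first boot after installation

for cycle, t in enumerate(boot_times, start=1):
    improvement = (baseline - t) / baseline * 100
    print(f"Boot {cycle}: {t}s ({improvement:.1f}% faster than the first boot)")
```

For example, boot 3 works out at (85 − 31) / 85 = 63.5%, which the table rounds down to 63%.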

You can see the massive improvement that occurs in the first few boots, followed by some smaller but still significant gains in the later boots and the final defrag optimization. This example used a vanilla machine with very few drivers installed and no additional applications. The more drivers and software are installed, the more files must be accessed during boot, which means these optimizations are likely to have an even more pronounced effect.

As this optimization is automatic, it’s not something that most people will need to worry about. But if you are building a machine that you would like to boot as quickly as possible from the moment it is deployed (appliance PCs, laptops, etc.) then it may be worthwhile adding a “boot training” and defrag task to your deployment process.

 

The post Windows 7 Self Optimizing Boot Process appeared first on AutoIt Consulting.

]]>
https://www.autoitconsulting.com/site/performance/windows-7-self-optimizing-boot-process/feed/ 1
Windows Performance Toolkit: Simple Boot Logging https://www.autoitconsulting.com/site/performance/windows-performance-toolkit-simple-boot-logging/ Thu, 28 Apr 2011 22:47:01 +0000 http://www.autoitconsulting.com/site/?p=276 Overview Troubleshooting slow boots and logons are a common request. In this post I will show you how to perform boot logging using the Windows Performance Toolkit (WPT) on a Windows 7 machine and perform some basic analysis of it. Preparation First, you need to install the WPT on the machine you wish to examine. […]

The post Windows Performance Toolkit: Simple Boot Logging appeared first on AutoIt Consulting.

]]>
Overview

Troubleshooting slow boots and logons is a common request. In this post I will show you how to perform boot logging using the Windows Performance Toolkit (WPT) on a Windows 7 machine and perform some basic analysis of the results.

Preparation

First, you need to install the WPT on the machine you wish to examine.

Sysinternals Autologon

Secondly, we will be tracing the boot process all the way until the user has logged in and the desktop is shown. If we rely on logging in quickly by hand, we introduce inconsistencies into any timings we take. The simplest solution is to use the Sysinternals Autologon tool, available from the Sysinternals site, and to configure it with the local or domain user we will be using for testing.

Performing the Boot Trace

    1. Logon to the machine as an administrative user.
    2. Use Autologon to set up the test user that will be used to log in automatically during the trace. The test user need not be an administrator, but if not you will need to respond to any UAC prompts during the process to allow the tools to elevate and complete the trace.
    3. Create a local folder, for example C:\PerfTrace, to store the boot trace.
    4. Open an Administrator command prompt and change to the trace folder created above (cd C:\PerfTrace).
    5. Run the command:
xbootmgr -trace boot
  1. The machine will automatically shut down, reboot and finally log in.
  2. A “Delaying for boot trace” message will appear and the system will pause for 120 seconds to capture post-logon events.
  3. The tool will then elevate, and a UAC consent box or prompt for credentials will appear.
  4. The trace will complete and the trace file will be written to C:\PerfTrace\boot_BASE+CSWITCH_1.etl.

Analysing the Boot Trace

You can look at the boot trace in two main ways. The first is to export the trace into XML, which allows you to see the main timing points; the second is to use the xperfview GUI.

Analysing using the XML Summary

To export the XML summary run the following command with the trace captured in the previous section:

xperf -i boot_BASE+CSWITCH_1.etl -o summary.xml -a boot

The resulting XML file can be opened in Internet Explorer (or your favourite XML editor). In order to expand and contract the individual nodes in IE you will need to allow active content by clicking on the yellow information warning bar at the top of the screen. Contract all nodes apart from those in the “timing” node to show the following view:

The two most immediately useful metrics are:

  • bootDoneViaExplorer – Duration of the boot (in milliseconds) until the start of Explorer.exe
  • bootDoneViaPostBoot – Length of the boot transition including PostBoot. This metric represents the total time of a boot transition.
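If you want these metrics in a script rather than browsing the file in IE, they can be pulled out programmatically. This is only a sketch: the exact element layout of summary.xml is an assumption here, based on the “timing” node described above, with the metrics carried as attributes.

```python
import xml.etree.ElementTree as ET

# Sketch: extract the headline boot metrics from a WPT boot summary XML.
# Assumes a <timing> element carrying the metric values as attributes.
def boot_timings(path):
    root = ET.parse(path).getroot()
    timing = root if root.tag == "timing" else root.find(".//timing")
    return {name: int(timing.get(name))
            for name in ("bootDoneViaExplorer", "bootDoneViaPostBoot")
            if timing is not None and timing.get(name) is not None}
```

For a trace like the example in this post, `boot_timings("summary.xml")` would return something like `{'bootDoneViaExplorer': 30094, 'bootDoneViaPostBoot': 50094}` (values in milliseconds).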

In this example, bootDoneViaPostBoot would seem to indicate that the total boot time was 50 seconds (50094 milliseconds). However, a boot trace waits for 10 seconds (postBootRequiredIdleTime) at the end of a boot until the system reaches an idle state. Therefore, to get the actual total boot time we must subtract 10 seconds; in this example the adjusted boot time was 40 seconds.
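The adjustment is simple enough to sketch, using the values from the example above:

```python
# Values taken from the example trace above, in milliseconds.
boot_done_via_post_boot = 50094   # total boot transition including PostBoot
post_boot_required_idle = 10000   # WPT's 10-second idle wait at the end of boot

adjusted = boot_done_via_post_boot - post_boot_required_idle
print(f"Actual boot time: {adjusted} ms (~{adjusted / 1000:.0f} seconds)")
```

Which gives 40094 ms, i.e. roughly the 40 seconds quoted above.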

Analysing using the xperfview GUI

To use a GUI to examine the boot trace open the trace in xperfview with the following command:

xperfview boot_BASE+CSWITCH_1.etl

There are many different views to look at in the xperfview GUI, but for this post we will concentrate on the main boot and logon processes (similar to the XML summary). Scroll down to the Winlogon section:

There are many different checkpoints here but some useful ones are:

  • GP Client – This checkpoint occurs at a number of different points: before the user logs in (Computer Group Policy) and after logon (User Group Policy). It is very useful for identifying any GPO-related problems.
  • CreateSession Notification – This checkpoint occurs when the user enters their credentials and starts the logon process.
  • Profiles – This checkpoint occurs when the user’s profile is being loaded.
  • StartShell Notification – This is the last checkpoint when the shell is ready to load and explorer.exe is about to be launched. It corresponds to the WinlogonInit endTime entry from the XML summary.

Summary

This post showed how to perform boot logging with WPT at the most basic level. Boot analysis can be a very complicated process and far too much to cover in a single post; future articles will go into more detail in individual areas.

 

The post Windows Performance Toolkit: Simple Boot Logging appeared first on AutoIt Consulting.

]]>
Windows Performance Toolkit: Installation https://www.autoitconsulting.com/site/performance/windows-performance-toolkit-installation/ Fri, 22 Apr 2011 22:30:33 +0000 http://www.autoitconsulting.com/site/?p=229 Overview The Windows Performance Toolkit (WPT) is a suite of tools designed for measuring and analysing system and application performance on Windows XP, Windows Vista, Windows 7 and Windows Server 2008. They can be used by enterprises to log and analyse clients in order to detect and optimise performance problems. I intend to use the […]

The post Windows Performance Toolkit: Installation appeared first on AutoIt Consulting.

]]>
Overview

The Windows Performance Toolkit (WPT) is a suite of tools designed for measuring and analysing system and application performance on Windows XP, Windows Vista, Windows 7 and Windows Server 2008. They can be used by enterprises to log and analyse clients in order to detect and optimise performance problems. I intend to use the WPT in a number of upcoming articles so this post will cover how to obtain and install it.

Unfortunately, the WPT is included as part of the Windows 7 SDK and cannot be downloaded separately, and the SDK download is a whopping 2.5GB! However, by following these steps it is possible to download only the minimal files required to gain access to the WPT MSI files. These MSI files can then be internally redistributed for easy installation of WPT.

Installation

  1. Download the Microsoft Windows SDK for Windows 7 and .NET Framework 4 (it’s a bootstrap setup and is only around 500KB).
  2. Run the file winsdk_web.exe.
  3. Accept all defaults until the Installation Options screen is reached.
  4. Deselect all components except Redistributable Packages \ Windows Performance Toolkit. (Depending on the client, you may not have the option to deselect the .NET 4 tools.)

    Windows 7 SDK Options for WPT

  5. Complete the installation.
  6. Browse to C:\Program Files\Microsoft SDKs\Windows\v7.1\Redist\Windows Performance Toolkit. Copy the wpt_x64.msi and wpt_x86.msi files.

    WPT Redist Files

These wpt_*.msi files can now be used to install the WPT on any client machine and should be kept in a safe place so that you don’t need to download the SDK each time.

 

The post Windows Performance Toolkit: Installation appeared first on AutoIt Consulting.

]]>
Welcome to the AutoIt Consulting Blog! https://www.autoitconsulting.com/site/site-news/welcome-to-the-autoit-consulting-blog/ Sun, 10 Apr 2011 10:12:51 +0000 http://www.autoitconsulting.com/site/?p=128 Overview Welcome to the AutoIt Consulting Blog! This blog will share tips, tricks and scripts related to Microsoft Windows deployment and related technologies. Until we start blogging in earnest, here are some links to some articles done by myself (Jonathan Bennett) for my former employer Microsoft MCS on the excellent Deployment Guys blog. Send us […]

The post Welcome to the AutoIt Consulting Blog! appeared first on AutoIt Consulting.

]]>
Overview

Welcome to the AutoIt Consulting Blog!

This blog will share tips, tricks and scripts related to Microsoft Windows deployment and related technologies.

Until we start blogging in earnest, here are some links to articles I (Jonathan Bennett) wrote for my former employer, Microsoft MCS, on the excellent Deployment Guys blog. Send us an email if you have any requests for the sort of content you would like to see.

Previous Posts on the Deployment Guys

Windows 7 VDI Optimization

GImageX

Dealing With Duplicate User Profile Links in Windows Vista

Working with crashdumps – debugger 101

 

The post Welcome to the AutoIt Consulting Blog! appeared first on AutoIt Consulting.

]]>