Creating an Error Log: Option 2 - Use an XML file

This is day 5 of the series on how to create an error log.

I normally advocate using XML files for storing data, but I'll be honest: I prefer CSV when it comes to my log files.  The ability to append to the file is why; it makes the coding so much easier.  Today we are going to look at what needs to be done if we want to use XML instead of CSV.
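For comparison, appending a single entry to a CSV log is one pipeline.  The path and columns below are placeholders for illustration, not necessarily the exact ones from the earlier posts in this series:

# Appending one entry to a CSV log (illustrative path and columns).
[PSCustomObject]@{
    Time    = Get-Date
    Message = 'Sample error text'
} | Export-Csv -Path C:\ps\error\ErrorLog.csv -Append -NoTypeInformation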

An XML file cannot simply be appended to.  This means that if we want to keep the data from a previous execution, we need to read it into an array, add a new element to that array for each error that occurs, and then write the entire array back to the error log at the end of the script.
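In outline, the read / add / rewrite pattern looks like this.  This is only a minimal sketch; the full function below adds the file checks and a -NoAppend switch:

# Read the existing entries, add the new error record, and rewrite the whole file.
$Data  = @(Import-Clixml -Path C:\ps\error\ErrorLog.xml)
$Data += $Error[0]                # The most recent error in the session
$Data | Export-Clixml -Path C:\ps\error\ErrorLog.xml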

Function Test-ADUsers
{
    [CmdletBinding()]
    Param (
        [parameter(Mandatory=$true)]
        [String[]]
        $Names,

        [Switch]
        $NoAppend
    )

# Begin Support Functions

    Function Test-ErrorLog
    {
        [CmdletBinding()]
        Param (
            [parameter(Mandatory=$true)]
            [String]
            $Path,

            [parameter(Mandatory=$true)]
            [String]
            $Name
        )

        # Test the path.
        If (!(Test-Path -Path $Path))
        {
            Write-Verbose "Creating the directory $Path"
            New-Item -Path $Path -ItemType Directory | Out-Null
        }

        # Test the file.
        If (!(Test-Path -Path "$Path\$Name"))
        {
            Write-Verbose "Creating the file $Name"
            New-Item -Path "$Path\$Name" -ItemType File | Out-Null
        }

    } # END: Function Test-ErrorLog

# End Support Functions

    # Initialize the array to hold the XML data.
    $Data = @()

    # Verify that the error log is present.
    Test-ErrorLog -Path C:\ps\error -Name ErrorLog.xml

    # If -NoAppend is TRUE, then clear the error log.
    If ($NoAppend)
    {
        Write-Verbose "Clearing the error log"
        Remove-Item -Path C:\ps\error\ErrorLog.xml -Force
        New-Item -Path C:\ps\error -Name ErrorLog.xml -ItemType File | Out-Null
    }
    Else
    {
        # Import the XML data if we are appending to the original file.
        # Import-Clixml cannot read an empty file, so only import when the log has content.
        If ((Get-Item -Path C:\ps\error\ErrorLog.xml).Length -gt 0)
        {
            $Data += Import-Clixml -Path C:\ps\error\ErrorLog.xml
        }
    }

    ForEach ($Name in $Names)
    {
        Try {Get-ADUser -Identity $Name -ErrorAction Stop}
        Catch
        {
            # Add the most recent error record ($Error[0]) to the array.
            $Data += $Error[0]
        }
    }

    # Commit the XML data to Disk
    $Data | Export-Clixml -Path C:\ps\error\ErrorLog.xml
} # END: Function Test-ADUsers

Test-ADUsers -Names "Administrator", "Bad", "Administrator"

We made a few changes.  First off, we initialize an empty array to store the XML data that we are about to read.  The file passed to the Test-ErrorLog -Name parameter has its extension changed to .xml.  In fact, all of our references to the error log are now .xml instead of .csv.

The next big change is in the IF statement where we test whether or not we are appending.  We added an ELSE statement.  This reads the objects in from the XML file and adds them to our array, skipping the import when the file is empty because Import-Clixml cannot read a zero-length file.  Remember, we cannot append directly to an XML file, so we need to read the contents into memory before we can proceed.

Now, take a look at the changes in the CATCH block.  Very simple.  I chose not to use the -ErrorVariable parameter because I've noticed that we generally get more detail by using PowerShell's built-in capability.  $Error is an automatic array of the session's errors, and $Error[0] is the most recent one.  We simply add it to our $Data array.
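If you want to see what an error record gives you, run a command that fails and take a look at $Error[0].  The properties below are standard members of an ErrorRecord object:

# Generate a failure, then inspect the most recent error record.
Get-ADUser -Identity 'Bad' -ErrorAction SilentlyContinue
$Error[0].Exception.Message    # The error text
$Error[0].CategoryInfo         # Category, reason, and target
$Error[0].InvocationInfo.Line  # The command that produced the error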

After we exit the ForEach loop, we commit the array to the XML file.  Once you run the code, use Import-Clixml to read the objects back into memory for processing.
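For example, something like this pulls the logged errors back in and displays the useful details.  The property names are ones carried by the serialized error records:

# Re-hydrate the logged errors and display the useful details.
Import-Clixml -Path C:\ps\error\ErrorLog.xml |
    Select-Object @{n='Target';e={$_.TargetObject}},
                  @{n='Message';e={$_.Exception.Message}} |
    Format-Table -AutoSize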

The downside of XML is that you must read the current file in if you want to preserve it.  The upside is that you can store much more detailed information.

Well, that is it.  We generate basic log files in my PowerShell class, but I have a feeling that we will be changing that exercise to something a little more advanced.




Comments

Jerris Heaton said…
Just curious... Do you prefer XML to JSON? If so, why? Seems JSON is easier to read when looking at it in raw format than XML. Is XML easier to manipulate in PowerShell vs. JSON?
Jerris,

Mostly a habit. I generally do not manually read my data files. Utilizing the ConvertTo-Json and ConvertFrom-Json cmdlets will let you work with JSON.
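A rough sketch of a JSON version of the log, if you want to go that route (same idea, just a different on-disk format; the path and columns are only examples):

# Read any existing entries, add the new error, and rewrite the JSON log.
$LogPath = 'C:\ps\error\ErrorLog.json'
$Data = @()
If ((Test-Path -Path $LogPath) -and (Get-Item -Path $LogPath).Length -gt 0)
{
    $Data += Get-Content -Path $LogPath -Raw | ConvertFrom-Json
}
$Data += [PSCustomObject]@{
    Time    = (Get-Date).ToString('s')
    Message = $Error[0].Exception.Message
}
$Data | ConvertTo-Json | Set-Content -Path $LogPath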

Have a good evening,
Jason
