Project Poindexter: (Non-Citrix Related) Grabbing PIX URL logs and checking them for malware.

This is my first non-Citrix related post. I don’t plan on making it a habit, but someone suggested I post this in case it is valuable to other INFOSEC types.

Let me start off by saying I am not a traditional security guy: I don’t have an abundance of hacking skills, and I am not a black hat, white hat, etc. I did work in security as the event correlation guy for a year, and I have been trying to leverage digital epidemiology as a way to secure my systems. As I have stated in previous blogs, we have a better chance of curing the common cold than getting rid of malware and 0-days. In fact, I would say there are two kinds of systems: breached and about to be breached. That is how you have to approach malware, in my opinion. What surprised me about the Aurora breach was that the INFOSEC community appears to spend the lion’s share, if not all, of its time on ingress and completely ignores egress. When I look at the Google breach, I see an attack that should have been mitigated within 24 hours.

Over the years I have deployed or evaluated a number of event correlation utilities, most of them costing in excess of $250K for a large implementation. What I generally did not like about shrink-wrapped solutions, and what I am most concerned about in the IT industry, is the de-emphasis on heuristics and the dependence on an automated process to detect a problem. In my opinion, an “Event Correlator” is not an appliance; it is an IT person looking at a series of logs and events and saying “Holy shit! What the HELL is that!”. The fact is, false positives make a lot of really expensive security software completely useless, and a stored procedure or IDS/IPS cannot do as good a job as a human being who can look at a series of logs and make an interpretation. What I want to provide here is some of the heavy lifting that can then be used by a human to determine if there is an issue.

The purpose of this post is to show how I grab syslog data from my PIX, allowing me to capture the URI stem of all outgoing sessions and log them into a SQL Server. Afterward, I can run key queries to troll for .exe, .dll, .tgz and any other problem extensions. I can also upload the latest malware list data and cross-reference it with the information in my database, which will let me see if any of my systems are phoning home to a botnet master, malware distribution site, etc. This is basically a take on my post on monitoring APT with EdgeSight.

The first order of business is to get the logs to the syslog server. I start by creating a filter that will grab the logs. (See Below)

The next step is to parse the incoming data into separate columns in my database. This is done by setting up a custom db format for the purpose of these logs. The parse script is provided below:
Also, check all checkboxes below “Read” and “Write”

Parsing Script: (Cut and paste it to a text file then use that text file in the dialog box above)
Function Main()
    Main = "OK"
    Dim MyMsg
    Dim Source
    Dim Destination
    Dim Payload
    Dim SourceBeg, SourceEnd
    Dim DSTBeg, DSTEnd

    With Fields
        Source = ""
        Destination = ""
        Payload = ""

        MyMsg = .VarCleanMessageText

        If ( Instr( MyMsg, "%PIX" ) ) Then
            SourceBeg = Instr( MyMsg, ": " ) + 2
            SourceEnd = Instr( SourceBeg, MyMsg, "Accessed" )
            Source = Mid( MyMsg, SourceBeg, SourceEnd - SourceBeg )
            DSTBeg = Instr( MyMsg, "URL" ) + 3
            DSTEnd = Instr( DSTBeg, MyMsg, ":" )
            Destination = Mid( MyMsg, DSTBeg, DSTEnd - DSTBeg )
        End If

        .VarCustom01 = Source
        .VarCustom02 = Destination
        .VarCustom03 = Payload
    End With
End Function
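If you would rather prototype the extraction outside of Kiwi first, here is the same logic as a quick Python sketch. The sample line in the test below is a made-up illustration of the %PIX "Accessed URL" format, not a captured log, so check it against your own PIX output.

```python
def parse_pix_url(msg):
    """Mirror of the Kiwi VBScript above: pull the source host from
    between the first ": " and the word "Accessed", and the destination
    from after "URL" up to the next ":". Returns ("", "") for lines
    that are not %PIX URL messages."""
    source = destination = ""
    if "%PIX" in msg:
        beg = msg.find(": ") + 2
        end = msg.find("Accessed", beg)
        if end != -1:
            source = msg[beg:end].strip()
        beg = msg.find("URL")
        if beg != -1:
            end = msg.find(":", beg + 3)
            if end != -1:
                destination = msg[beg + 3:end].strip()
    return source, destination
```

Once the slicing looks right against a handful of real lines, porting the offsets back into the VBScript (or adjusting them) is straightforward.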

The last step is to write the data to SQL but first let’s do a few tasks to prepare the table.

  1. Set up an ODBC connection to a SQL Server, create a database called “Syslog”, and connect to it with an account that has dbo privileges.
  2. Create the Custom DB Format for grabbing URL’s

Note that this table will have six columns: msgdatetime, msghostname, msgtext, source, destination and payload. (The last column, payload, is not working yet, but I will show you how to get the payload later.)

3. Once this is done, create an action called “Write to SQL”, select “PIX_URL” from the custom data format list, name the table “PIX_URL”, then select “Create Table”.

Okay, so now that we have the data writing to SQL Server, let’s look at a month’s worth of data on one of my systems:

This query will give you each payload and the number of times it has been accessed. Using the HAVING clause, I am going to ask for every URI stem that has been accessed more than 5 times in the last month.

select substring(msgtext, 41, 2048) as "Payload", count(substring(msgtext, 41, 2048))
from pix_url
group by substring(msgtext, 41, 2048)
having count(substring(msgtext, 41, 2048)) > 5
order by count(substring(msgtext, 41, 2048)) desc

The idea behind this is that if you note 1000 records to “” you may want to do something about it. You can also download malware site list data, import it into SQL, and cross-reference it to ensure that you are not communicating with any noted malware sites. Depending on the response to this blog, I may post those instructions as well.
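As a sketch of that cross-reference step (the destinations and blocklist entries below are hypothetical, and real malware lists come in varying formats, so treat this as a shape, not a recipe):

```python
def flag_malware_hits(destinations, blocklist):
    """Return the logged destinations that appear on a malware blocklist.
    `destinations` would come from the pix_url table's destination column;
    `blocklist` from whatever malware-domain list you download."""
    bad = {d.strip().lower() for d in blocklist}
    return [d for d in destinations if d.strip().lower() in bad]

# Hypothetical values for illustration:
hits = flag_malware_hits(
    ["93.184.216.34", "update.example.com", "evil.example.net"],
    ["evil.example.net", "botnet.example.org"],
)
```

In practice you would do the same join inside SQL Server once the list is imported; the set-based lookup above is just the quickest way to eyeball an export.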

 And here are what the results look like:

Another query I like to run is one looking for executable files in the URI-stem.

select Msghostname as "Firewall", Source, Destination, substring(msgtext, 41, 2048) as "Payload"
from pix_url
where msgtext like '%.exe%'
order by msgdatetime desc

This allows me to troll for executables that my internal users are accessing; as with most malware, this should show itself early in the breach.

So how do you monitor?

Well, you don’t have to sit there with Query Analyzer open all day; you can set up SQL Server Reporting Services to do this chore for you and deliver a dashboard to operations personnel. Here is a quick view of a dashboard that refreshes every 5 seconds and turns RED when “.exe” is in the URI stem. In this scenario, you would be able to investigate the executable being downloaded by the client and ensure that it is not malware. You can test this yourself once you set it up by going to any site and typing “/test.exe” at the end of the URL.

Again, I am not a traditional security guy, so this could be utterly useless; I am not the PIX guy at my job, but I AM the PIX guy at home. I have found it very useful for catching malware and 0-days that my antivirus does not pick up. While I cannot speak with as much authority as a number of CISSPs and INFOSEC gurus, I can say that the continued ignorance surrounding egress will allow malware to run amok. As I stated in a previous blog, it is foolish to beat your chest over the millions of packets you keep out while the few that get in can take anything they want and leave unmolested. Just as a store has to let people in and then focus on ensuring no one leaves with anything they didn’t pay for, IT security needs to ease over to this mentality and keep track of what is leaving its networks and where it is being sent. At any rate, if this has value to anyone let me know; I will put the RDL (report file) online for download if anyone wants to set it up. I know a lot of PIX guys aren’t necessarily web/database guys, so if you have any questions, feel free to ask.

Thanks for reading,



Project Poindexter: Endpoint Analysis Log Harvesting

About four years ago, management wanted to know which users were failing their endpoint analysis scans and to what extent we were compliant with endpoint analysis. We spent over $30K on a product called “Clear2View”; it did some rudimentary scan logging for us, but the data was not very easy to query even though it was located in a SQL database, and the reporting features were, in my opinion, only so-so. It appears Clear2View has since gone away, and many of us are left wondering how we will get our EPA scan data on the new AGEE platform. We got past this dilemma by harvesting the syslog data from the AGEE, parsing it into a SQL Server, and then integrating it with business intelligence.

As with other “Project Poindexter” posts, we will cover how to grab EPA scan results from syslog, write them to a SQL Server, and report on them at a cost considerably less than $30K. What you need:

  • Kiwi Syslog Server (the full version is $260)
  • SQL Server w/Reporting Services (you should already have this if you have EdgeSight)
  • Some VBScript or parsing skills, although I will provide the parsing script
  • The ability to take my SQL syntax and edit it so that it suits your scans/environment
  • The ability to upload an RDL to Reporting Services and map it to a data source

Getting started, here is an example. At home, with the VPX and some test VMs, I set up the following scans:

As you can see, I am testing for the McAfee suite (a canned scan) and checking whether the Windows Firewall is running.

Results: Here are the results that come into KIWI.

06-26-2010    12:16:05    Local7.Error    06/26/2010:11:41:06 GMT ns PPE-0 : SSLVPN CLISEC_EXP_EVAL 104254 : User wireless: – Client IP – Vserver – Client security expression CLIENT.SVC(MpsSvc) EXISTS evaluated to FALSE(3)

06-26-2010    12:16:05    Local7.Error    06/26/2010:11:41:06 GMT ns PPE-0 : SSLVPN CLISEC_EXP_EVAL 104253 : User wireless: – Client IP – Vserver – Client security expression CLIENT.SVC(MCVSRte).VERSION == 9.0.0 -frequency 5 evaluated to FALSE(3)

06-26-2010    12:16:05    Local7.Error    06/26/2010:11:41:06 GMT ns PPE-0 : SSLVPN CLISEC_EXP_EVAL 104252 : User wireless: – Client IP – Vserver – Client security expression CLIENT.APPLICATION.AV(McafeeVirusScanEnterprise).VERSION == 7.0 -frequency 5 evaluated to FALSE(3)

06-26-2010    12:16:05    Local7.Error    06/26/2010:11:41:06 GMT ns PPE-0 : SSLVPN CLISEC_EXP_EVAL 104251 : User wireless: – Client IP – Vserver – Client security expression CLIENT.APPLICATION.AV(McafeeVirusScan).VERSION == 7.0 -frequency 5 evaluated to FALSE(3)

06-26-2010    12:16:05    Local7.Error    06/26/2010:11:41:06 GMT ns PPE-0 : SSLVPN CLISEC_EXP_EVAL 104250 : User wireless: – Client IP – Vserver – Client security expression CLIENT.APPLICATION.AV(McafeeNetshield).VERSION == 7.0 -frequency 5 evaluated to FALSE(3)
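Before wiring up Kiwi, it can help to sanity-check the extraction in a few lines of Python. This is a hedged sketch based on the CLISEC_EXP_EVAL samples above; the IPs in the sample line are placeholders, since the originals are not shown.

```python
import re

# Pull the user, the scan expression, and the pass/fail result out of a
# CLISEC_EXP_EVAL line, based on the layout of the samples above.
CLISEC = re.compile(
    r"User (?P<user>\S+?):"                  # user name, up to the colon
    r".*?Client security expression "
    r"(?P<scan>.+?) evaluated to (?P<result>\w+)"
)

def parse_clisec(line):
    """Return {'user', 'scan', 'result'} for a CLISEC line, or None."""
    m = CLISEC.search(line)
    return m.groupdict() if m else None

sample = ("06/26/2010:11:41:06 GMT ns PPE-0 : SSLVPN CLISEC_EXP_EVAL 104254 : "
          "User wireless: - Client IP 10.0.0.9 - Vserver 10.0.0.1:443 - "
          "Client security expression CLIENT.SVC(MpsSvc) EXISTS evaluated to FALSE(3)")
```

The same three fields are what the Kiwi parsing script writes into the userid, clientip and scan columns later on.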

So next let’s take these results and get them parsed then logged to SQL Server:

Create a new rule called “EPA Scans” with one filter and three actions.
The filter is called “Filter Text – CLISEC” and is set up to filter message text for “CLISEC”.
The first action is “Display”.
The second action is “Parse Data”. (Note: check all the boxes for Read and Write, and browse to the location of the parsing script, which you can get in the “ACCESS GATEWAY” forum.)

The third Action is called “Write to SQL” which will require a custom data format so let’s cover those steps:

Custom Data Format:
Create a custom DB format called EPA_SCANS; it should appear as follows. (Note the field names AND the data types, as they are very important.)

Now that you have created your custom DB format go back to your “Write to SQL” action

Make sure that your DSN connect string is correct, that you name the table EPA_SCANS under “database table name”, and that you use the custom DB format EPA_SCANS, then click “Create Table”.

Once this is done you should be all set. Log into your VPN/AGEE address and look for the results by running a simple SQL query:

select * from epa_scans
order by msgdatetime desc

You should see something like the following:

Note that the results include 7 columns. I always include the entire log in the msgtext column for several reasons; among them, security statutes may dictate that you must have the full log available, and there are instances where parsed logs are not admissible in court. For this endeavor it is your choice; I have a habit of just leaving it in.

Also, my goal in setting up the logging was for the Service Desk staff to be able to look at the results and tell end users what the problem is. To deal with that, let’s take a look at the actual scans:

CLIENT.APPLICATION.AV(McafeeNetshield).VERSION == 7.0 -frequency 5
CLIENT.APPLICATION.AV(McafeeVirusScan).VERSION == 7.0 -frequency 5
CLIENT.APPLICATION.AV(McafeeVirusScanEnterprise).VERSION == 7.0 -frequency 5
CLIENT.SVC(MCVSRte).VERSION == 9.0.0 -frequency 5

As you can see, a Level I engineer may not have an easy time with these, so we are going to change our SQL a bit to produce a friendlier description of each scan. That way, when someone calls the helpdesk saying they cannot get to a resource due to a failed scan, the person on the phone can give them a clear explanation of the issue.

So let’s shake up our SQL just a little:

select msgdatetime, userid, clientip, scan =
    case scan
        when 'CLIENT.SVC(MCVSRte).VERSION == 9.0.0 -frequency 5' then 'Antivirus Service Check'
        when 'CLIENT.APPLICATION.AV(McafeeVirusScanEnterprise).VERSION == 7.0 -frequency 5' then 'Antivirus ENT. Version Check'
        when 'CLIENT.APPLICATION.AV(McafeeVirusScan).VERSION == 7.0 -frequency 5' then 'Antivirus Std. Version Check'
        when 'CLIENT.APPLICATION.AV(McafeeNetshield).VERSION == 7.0 -frequency 5' then 'Netshield Version 7 Check'
        when 'CLIENT.SVC(MpsSvc) EXISTS' then 'Check Microsoft Firewall Service'
        else scan
    end
from epa_scans
order by msgdatetime desc

WordPress has a habit of turning single quotes into curly quotes, so you likely cannot paste this straight into your query; I will include a clean copy in the Access Gateway area as well. At any rate, note the following: we are taking the cryptic “CLIENT.APPLICATION.AV(McafeeVirusScanEnterprise).VERSION == 7.0 -frequency 5” text and converting it into the more easily interpreted “Antivirus ENT. Version Check”. Your SQL query, and eventually your SQL Reporting Services reports, will appear as follows:
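If you post-process query results in code rather than mapping in SQL, the same translation the CASE expression performs can live in a plain dictionary; the descriptions below are taken from that mapping, and you would adjust both sides to your own scans.

```python
# Raw scan expression -> helpdesk-friendly label (same pairs as the SQL CASE).
FRIENDLY_SCAN = {
    "CLIENT.SVC(MCVSRte).VERSION == 9.0.0 -frequency 5": "Antivirus Service Check",
    "CLIENT.APPLICATION.AV(McafeeVirusScanEnterprise).VERSION == 7.0 -frequency 5": "Antivirus ENT. Version Check",
    "CLIENT.APPLICATION.AV(McafeeVirusScan).VERSION == 7.0 -frequency 5": "Antivirus Std. Version Check",
    "CLIENT.APPLICATION.AV(McafeeNetshield).VERSION == 7.0 -frequency 5": "Netshield Version 7 Check",
    "CLIENT.SVC(MpsSvc) EXISTS": "Check Microsoft Firewall Service",
}

def friendly_scan(scan):
    """Translate a raw scan expression into a friendly label;
    unknown expressions pass through unchanged."""
    return FRIENDLY_SCAN.get(scan.strip(), scan)
```

A dict has the side benefit of surviving quote-mangling blog platforms better than pasted SQL.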

Also, your SQL Report will appear as follows:

Note that the failures are RED, which will alert your staff, and also note how much more logical and interpretable the scan information is. You could also rig up self-service by putting a link on the scan that sends the user to a place to inoculate their system, or to instructions on how to turn on their Microsoft Firewall.

Again all parsing scripts, RDL’s and SQL Queries are located here

Why is this even important?
Well, as the security screw gets tighter, more and more restrictions are going to be placed on both internal and remote access systems. It would be a disaster to deploy endpoint analysis on a large scale without at least giving the support staff the ability to tell users why they did not get access to a resource. We plan on taking this to the next level by providing an HTML injection rule so that when a user lands on Web Interface because they failed a scan, a popup tells them they failed, with a URL to the report above letting them know which scan failed and, eventually, a hyperlink to a remediation page (be it instructions or updated signatures).

Also, I believe there never was a Clear2View for the AGEE anyway, so those of us on the AGEE version were left out of that game. This process sets you up with all the business intelligence you need to support NAC-like endpoint analysis and also allows you to report on the level of compliance for your company or agency. Oh…and it only costs $260 plus some time (which I understand is expensive).

Obviously, Citrix will not support this, and you WILL HAVE to be able to edit the SQL statement both within Query Analyzer AND in the RDL file, otherwise your report will not show proper data. You need some SQL proficiency to pull this off, but you do not have to be a full-fledged DBA. If you are a partner, this could be a very nice value-add for a customer if you have a few hours left in an engagement. It was not excessively difficult to do.

Also, I don’t run all of the scans that everyone else may run. There may be instances where a particular scan does not parse properly; if so, shoot me an email and I will see if I can’t figure it out.

As with the VPN logging, I plan on producing a video walkthrough of this entire task. I should have some head-down time at the beginning of next month to walk through it.

This literally took 45 minutes to set up once I had the parsing scripts and my SQL figured out. If you run into a problem, feel free to shoot me an email.

Thanks for reading


Project Poindexter:VPN Logs

Total Information Awareness with your Netscaler/AGEE

Harvesting VPN Logs with the Netscaler:
When I first heard about Total Information Awareness I was a little concerned. Like a lot of my current team, I am one of those libertarians who isn’t keen on his personal life being correlated and analyzed by a program overseen by unelected officials. That said, as an individual responsible for the security and integrity of information systems, and as a person whose own personally identifiable information is in the databases of my bank, doctor and employer, I believe I am entitled to know what is going on, and I would like to think the stewards of my information are likewise informed about what is happening with my data. For this reason, I decided to start looking into how I could better monitor activity on my Netscaler, and I wanted to provide an accompanying guide to my SCIFNET post/video showing how you can compartmentalize sensitive data using the VPX or a regular MPX-class Netscaler.

Most engineers are fully aware that the Netscaler platform is capable of sending information to a syslog server. This in and of itself is not that significant, as many network/Unix-based appliances can send syslog. What I want to discuss in this post is how to use a very cheap syslog server to set up a fully functional log consolidation system that parses specific records and writes them to a relational database.

I find a certain amount of frustration with today’s six-figure event correlation systems. If you can only respond to a breach by doing “Find Next” on a 90GB ASCII file, that is not the most agile way to react to an INFOSEC-related incident. As with Admiral Poindexter’s vision, proper analysis of events can be an instrumental tool in the defense of your information systems.

Below is an example of a typical VPN log from your Netscaler/AGEE appliance:
06/15/2010:05:59:38 ns PPE-0 : SSLVPN HTTPREQUEST 94167 : Context wireless@ – SessionId: 5- User wireless : Group(s) SCIF-NET USERS : Vserver – 06/15/2010:05:59:38 GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270-B8A183C27464&VEOH_GUIDE_AUTH=am1zYXpib3k6MTI3ODAyODkyMTM1NzpyZWdp – –

Using KIWI Syslog server’s parsing capability, I will actually parse this data and write it into a SQL Server database to allow for very easy queries and eventually dashboards showing accountability and key data.

I have had engineers ask me how to get things like the client IP address and what users have accessed. I will provide a parsing script that pulls the following from the example above:

Context: wireless@
Payload: GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270
*I have also included “Assigned_IP” in case any of you assign IP addresses instead of NATing. If you are able to get the destination of where a user was going, the need to account for every IP address becomes less important, but some folks insist on not NATing their users; if so, the parse script will grab their IPs as well.
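For prototyping the extraction before committing it to a Kiwi parsing script, a rough Python equivalent of the HTTPREQUEST parse looks like this; the IPs and GUID in the sample line are placeholders standing in for the example log above.

```python
import re

# Grab the context (user@source-ip) and the first HTTP verb + URI stem,
# per the HTTPREQUEST example above.
HTTPREQ = re.compile(
    r"Context (?P<context>\S+)"
    r".*?(?P<payload>(?:GET|POST) \S+)"
)

def parse_httprequest(line):
    """Return {'context', 'payload'} for an HTTPREQUEST line, or None."""
    m = HTTPREQ.search(line)
    return m.groupdict() if m else None

sample = ("06/15/2010:05:59:38 ns PPE-0 : SSLVPN HTTPREQUEST 94167 : "
          "Context wireless@10.0.0.9 - SessionId: 5- User wireless : "
          "Group(s) SCIF-NET USERS : Vserver 10.0.0.1:443 - "
          "06/15/2010:05:59:38 GET /service/getUpdate.xml?clientGUID=TEST - -")
```

The context and payload groups correspond to the columns the parsing script writes for each session.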

And just to show you that I do have the data you can see in the screen print below of the SQL Query:

Uh, John…who cares?
Well, most of the time you really shouldn’t need to do a lot of tracking of where your users are going, but in some higher-security environments, being able to account for where users have gone can be very important. Say you hosted (a site I hate, but for the purposes of this lab its malware…err…client was installed on the laptop I was testing with) and someone said the system had been compromised. You could immediately obtain every user ID and IP address that accessed that site and the payload they ran against it. You would see the XSS or SQL injection string immediately. You would also note a system that had malware and was trying to get in over one of the SMB “whipping boys” (445, 135-139).

Why parse data instead of just throwing it all into a flat file and waiting for an auditor to ask for it?
As I stated previously, having your data in a relational database gives you a number of advantages, not just pretty tables and eventually dashboards; you also open the door to the following:

  • Geospatial analysis of incoming IP addresses (by cross-referencing context with free geospatial IP-to-location data).
  • An actual count of the number of concurrent users on a system within a block of time, including historical reporting and trending.
  • The number of times a “Deny” policy has been tripped and who tripped it, which is handy if you are compartmentalizing your data and want to know who tried to access something they are not allowed to.
  • Your sensitive data turns up on WikiLeaks and you want to know every user who accessed the resource the data resides on, when, and what ports they used.
  • And lastly, finding out who is going “\\webserver\c$” to your web server instead of “http://webserver”.

So what do I log?
Well, I log basically everything, but for VPN I log three different events into two different tables: all HTTP-based traffic, normal UDP/TCP-based connections, and a separate table for all of my “DENIED_BY_POLICY” events.

Here is an example of an HTTPREQUEST log:
06/15/2010:11:59:58 ns PPE-0 : SSLVPN HTTPREQUEST 110352 : Context wireless@ – SessionId: 5- User wireless : Group(s) SCIF-NET USERS : Vserver – 06/15/2010:11:59:58 GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270-B8A183C27464&VEOH_GUIDE_AUTH=am1zYXpib3k6MTI3ODAyODkyMTM1NzpyZWdp – –

Here is an example of TCP/UDPFlow statistics:
06/15/2010:12:18:16 ns PPE-0 : SSLVPN UDPFLOWSTAT 111065 : Context wireless@ – SessionId: 5- User wireless – Client_ip – Nat_ip – Vserver – Source – Destination – Start_time “06/15/2010:12:15:32 ” – End_time “06/15/2010:12:18:16 ” – Duration 00:02:44 – Total_bytes_send 1729 – Total_bytes_recv 0 – Access Allowed – Group(s) “SCIF-NET USERS”

Here is an example of a DENIED_BY_POLICY event: (Over HTTP)
06/15/2010:10:17:14 ns PPE-0 : SSLVPN HTTP_RESOURCEACCESS_DENIED 106151 : Context wireless@ – SessionId: 5- User wireless – Vserver – Total_bytes_send 420 – Remote_host – Denied_url POST /tracker/update.jsp – Denied_by_policy “Problem-Site” – Group(s) “SCIF-NET USERS”

Let’s talk a little about the “DENIED_BY_POLICY” logs

Here is a scenario: I have a problem website that I do not want any of my users to go to, so I create a policy called “Problem-Site” denying access to the IP of the problem site.

For the log above, I parse the following:
Policy: Problem-Site
Payload: POST /tracker/update.jsp

I also log non-http denies as well, these appear like the following:
06/14/2010:21:08:03 ns PPE-0 : SSLVPN NONHTTP_RESOURCEACCESS_DENIED 69761 : Context wireless@ – SessionId: 5- User wireless – Client_ip – Nat_ip “Mapped Ip” – Vserver – Source – Destination – Total_bytes_send 291 – Total_bytes_recv 0 – Denied_by_policy “TOP-SECRET-DENY” – Group(s) “SCIF-NET USERS”

Here is a scenario: You read a story in “” about some kid who tried to give a bunch of sensitive data to a hacker, or even WikiLeaks, and you are concerned about your own data being accessed without authorization. You want to monitor all attempts to gain unauthorized access and note them, or, since they are in SQL Server w/Reporting Services, create a dashboard that goes RED when a particular policy is tripped.

Another scenario would be to monitor successes and note the “Context”. If most users who access data governed by the “TOP-SECRET-ALLOW” policy come from a specific network ID and you start seeing access from elsewhere, you can check whether a user ID has been compromised; you can also query how often a user accesses data, and from which IP addresses. If someone’s account is compromised, the access will likely show up from another IP, as it is less likely the attacker is sitting at the user’s terminal.
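A minimal sketch of that check, assuming a hypothetical expected network of 10.1.1.0/24 (the actual addresses would come from your own environment and VPN log table):

```python
import ipaddress

# Assumption: the network your users for this group normally come from.
EXPECTED_NET = ipaddress.ip_network("10.1.1.0/24")

def unexpected_sources(rows):
    """rows: (userid, client_ip) pairs pulled from the VPN log table.
    Returns the pairs whose client IP falls outside the expected network,
    a cheap first pass at spotting a possibly compromised account."""
    return [(user, ip) for user, ip in rows
            if ipaddress.ip_address(ip) not in EXPECTED_NET]
```

The same filter can be expressed as a WHERE clause on the parsed clientip column; the point is that once the IP lives in its own column, "who logged in from somewhere odd" is one query, not a grep.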

In the log above I parse the following:
Destination: (note the :139 indicating an attempt to use SMB)
Payload: (Blank if not HTTP)

Below is an example of Reporting Services dashboard that refreshes every minute:(Note, I have a particular Policy that turns red in this dashboard to alert me of an important breach attempt)

Time Appliance Context Destination Policy Payload
12:37 wireless@ :3389 TOP-SECRET-DENY  
12:37 wireless@ :3389 TOP-SECRET-DENY  
12:37 wireless@ TOP-SECRET-DENY  
12:37 wireless@ TOP-SECRET-DENY
12:37 wireless@ Problem-Site POST /tracker/update.jsp
12:37 wireless@ TOP-SECRET-DENY   


What You need:

  • An incumbent SQL Server environment, plus Reporting Services if you want dashboards (if you have EdgeSight you should already have this)
  • The ability to set up an ODBC connection; remember, on a 64-bit server/workstation you need to use the ODBC tool in %Systemroot%\SysWOW64
  • The ability to set up a database connection in Reporting Services
  • $245 for a full version of Kiwi; if you can buy a Netscaler you can afford a full version of Kiwi, and I will cover several solutions that will make this the best $245 you have ever spent

How to set it up:
Once you brow-beat your cheap boss into spending the $245 on Kiwi, perform the following steps:

Go to and download all of the files. (Follow the instructions in the post)

Create a Database called Syslog with a username and password that has DBO privileges and create an ODBC Data Source on the server hosting KIWI for the syslog database and name it syslogd.

After renaming Netscaler.txt to Netscaler.ini go to KIWI and import the ini file.

On each rule, go to the “Write to SQL” Action and click “Create Table”

On each rule, go to the “Parse Data” action and click “Browse” to upload the parsing script that goes with each rule. (Check all checkboxes under “Read and Write”.)

Once this is done you will be able to collect a ton of very useful information, and it beats the hell out of a 90GB ASCII file, or writing everything into a single event correlation system without the ability to query on specific columns. All of the parsing scripts write the entire log to the msgtext column, so you still have the original log if there are ever any questions. Being able to parse key information into a specific column will give you a considerably higher level of agility when searching for information about a particular user, IP address, destination or security policy.

If there is a worm sending a particular payload over HTTP, you are one query away from finding out every infected IP address. If an auditor asks you how many users have accessed a sensitive server, you are a query away from providing that information. I will supplement this post with a video of the entire setup from start to finish within the next two weeks (hopefully).

Also, I tried this in a home-based lab (I cannot use my logs from work), so please, if you have any issues getting it to work, let me know so I can write better instructions. Keep in mind I have not looked at this with ICAProxy logs; I am hoping to do that ASAP, and there may be a supplement to this post with a different script and maybe a different table for ICAProxy logs. I am waiting on an enhancement request before I tackle them (they will come across as “SSLVPN”, but the log does look different than standard VPN logs).

And most importantly, I am not a developer; I am a poor man’s DBA and a marginal scripter at best. If you can write a better parsing script, please let me know!

Thanks for reading

John Smith

Calling all Govies: Seamless ICAProxy with Smart Cards and AGEE

With the release of Web Interface 5.3 from Citrix, we finally have what appears to be seamless smart card access for AGEE customers who want to maintain their current level of ICAProxy without the need to turn on VPN. This is significant because of the looming compliance deadline for HSPD-12, which many federal agencies are meeting through the use of smart or CAC cards.

What does this mean?
It means that your end users can authenticate to the Access Gateway with their smart card and have all of their applications presented to them in the same manner they see today when they log in with AD credentials. I just finished testing mine, going through a dry run on my AGEE with a smart card, and it works very well.

What do I need?
You need to upgrade your Access Gateway Enterprise to 9.2 in addition to installing/upgrading your Web Interface to 5.3. There are some detailed directions located here:

What I do not like about the solution is the assumption that every Citrix engineer is a Domain Administrator; using the article above, you will be required to manually set this up for every AD computer object. My farm will be well in excess of 100 servers, and since we do not have domain admin access, we will need to tie up an AD engineer for an entire day just to get the constrained delegation set up. What I do like about this solution is that I no longer need the middleware. Currently we use Active Identity as our middleware, and it ties up about 30 MB per session on my XenApp boxes. Across thousands of users, this can equate to a sizable hardware savings and may make the time spent on the initial configuration worth it.

There is more to come on this subject as I blog from Synergy this week; if you are a Fed and are at Synergy, please find me if you have any questions. I am a big ugly guy with black glasses. If you have any questions on how we got ours to work, please send me an email and I will call you and we can work through it together. You CAN do this without setting up VPN now, and you don’t appear to need ISA Server or have to lose your EPA scans by setting up an SSL Bridge. This is great news for a lot of us Feds who have been dealing with the HSPD-12 specter for some time now.

More to come! Stay tuned this week as I blog from Synergy.

Sorry for the short post; I plan to cover how you can log these users and write their usernames and IPs into a SQL database for reporting and referencing.

John Smith

The Digital SCIF: Compartmentalizing Sensitive data with Access Gateway Enterprise Edition (SCIFNET)


A little over six months ago, Citrix released the Netscaler VPX virtual appliance, and I was immediately thrilled with the potential to create my own virtual lab using XenServer and internal Xen networks on the hypervisor for downstream hosts. What I noticed was that I could locate resources inside a hypervisor’s black network and make them available externally via a VIP or a secure VPN tunnel. This led me to believe that a resource that is, for all intents and purposes, off the internal network can live safely there and never be exposed to the corporate network, giving administrators another layer with which to further compartmentalize sensitive data. The compartmentalizing of sensitive data made me think of a military/DOD term, “skiff”, or more properly the Sensitive Compartmented Information Facility, SCIF. With a SCIF, all access, work and manipulation associated with specific sensitive information occurs within the confines of a specific building. What I am proposing is that you can use an Access Gateway Enterprise Edition to grant access to specific resources following this same model, providing secure access and accountability, and ensuring that the only way to get to that data is via a gauntlet of two-factor authentication, application firewalls and endpoint analysis, followed by a second level of policy-based access to internal resources that are only reachable via this secure tunnel.

SCIFNET: (“skiff-net”)

Placing a VPN in front of resources is not necessarily new; while VPNs are most commonly used for remote access, there are instances where an administrator will use a VPN to secure a wireless network or to provide secure access to sensitive information. What I will describe here is the next level, where not only is access restricted, but the AGEE integrates with the existing identity management framework and provides extensive logging and policy-based access, delivering a least-privilege model on a per-resource basis.

Why put my data in a SCIF?

Currently your internal network is protected by a NATed firewall, internal ACLs, etc. More mature networks have already layered their services by network, placing Oracle servers in one network, web servers in another, SQL Servers in still another, and so on. As the security screws get tightened year after year, we find that segmenting our services onto particular networks may not be enough. Imagine a database residing on a server that is completely invisible to the internal network and does not even have a default gateway assigned to it. No MAC address to show up in ARP tables; no ports exposed via a NESSUS/SATAN/SARA scan.

In the "glass-half-empty" world of IT security there are two types of systems: compromised and being-compromised. In 2004, during a particularly heated security discussion, I suggested that the only way we could truly secure our systems was to unplug them from the network. With the SCIFNET solution I am proposing, you create an internal network on your XenServer or ESX Server that does not reside on the internal network. This means that all communication occurs on the bus of the hypervisor, which has gigabit-level speeds available on it.

So your SQL Server and web server are living inside a hypervisor with no default gateway and no ability to route to your internal network. Great job…now how do you make them available? Well, in an earlier blog I discussed my time working as a county health inspector. When I inspected a convenience store in a particularly bad neighborhood, the shop owner would open a barred window and ask the customer what they wanted; he would take the money and go get the merchandise, and the entire transaction occurred outside his store. In this scenario, his exposure and risk were limited, as the person was never allowed to enter the store and potentially rob him or attempt to leave with merchandise he/she did not pay for. SCIFNET works in a similar fashion, whereby the user connects to an Access Gateway that has a leg in both networks; but unlike a door, it is more like a barred window granting access to internal resources. And even better than my shop owner, I will log each access, I will account for how long they used the resource, and I will log all unauthorized access attempts to the resource as well. By inserting a VPX in front of the resource, I am able to provide barred-window access to sensitive resources that includes the highest level of accountability and record keeping.

Barred Window Access:

The Netscaler VPX provides several secure access options to ensure anyone entering the secured network passes multiple forms of authentication, endpoint analysis and application firewall rules. Before users even begin to attempt to access internal resources, they are met with a myriad of rules and scans to verify they are allowed to even attempt access to sensitive data. While I may locate a resource on an internal network on my hypervisor, I can offer it to the end user in a variety of ways, among them via VPN or via AAA authentication to a VIP. So while my web-server/db-server combo may exist on a completely invisible network inside a hypervisor, I am able to deliver it by creating a VIP on the VPX and offering that VIP to users on the internal network. As of version 9.x of the Netscaler, I can add a layer of security by forcing AAA authentication to that VIP. If you need to grant non-HTTP access to a server that holds either sensitive documents or a back-end database, you can offer a VPN tunnel into the internal network on the hypervisor. With split tunneling turned off, you can ensure that the client is only able to access internal resources while connected to the VPN and keep any outside connections from getting in.


As with the hardware appliance, the VPX allows for two-factor authentication using smart cards (HSPD-12), SecurID, LDAP (AD/NDS/eDirectory) and local authentication. All AAA logs can be sent to an event correlation engine for parsing and accountability to ensure that access attempts are accounted for and breach attempts can be reported and acted on immediately (custom solution; email me if you are interested in it). So far, I have tested two-factor authentication with AD credentials and SecurID tokens, and have used smart cards (CAC) in single-authentication mode, without any issues.

Endpoint Analysis:

In addition to authenticating users who wish to access sensitive data, you can also set minimum standards for the systems accessing it. Using the VPX, you can ensure that systems accessing the SCIF have adequate virus signatures, host-based firewalls and encryption software. Using endpoint analysis, you can verify that any system meets a pre-selected set of requirements prior to accessing the systems inside, ensuring that an infected system, or a system with an outdated virus signature, is not allowed access. You may also want only a select group of systems accessing the SCIF; by putting a watermark in the registry and scanning for it, you can restrict the set of systems that are allowed access in addition to the set of users.

Application Firewall:

Not everyone purchases this feature; in fact, Citrix does not bundle it with the Express edition of the VPX, but you can get a 90-day Platinum edition that has it. The application firewall allows your front-end SSL VPN solution to be protected by a layer 4-7 firewall. By enforcing a "START URL" rule, you can ensure that anyone who attempts to access the system by IP is dropped, meaning any worm on the loose, or person scanning for port 443 or port 80 over an IP, will never reach the authentication page. The same feature provides buffer overflow, SQL injection, cross-site scripting and custom URL filter protection. An individual would need to know the exact URL to connect to before they even get a chance to authenticate and be scanned.

Accessing Sensitive Resources:


Okay, you have typed in the correct URL, you have all of the necessary virus updates and watermarks to pass endpoint analysis, and you have passed two-factor authentication; now you are free to access whatever you want inside the SCIF, correct? No. In fact, you have only entered the building; now the actual compartmentalized access control begins to take shape. While most SSL VPN solutions will offer a similar gauntlet at login, once you are in the door you can attempt to reach any IP address thereafter. The second part of this posting covers what can be done after authentication to ensure a user doesn't just wander around the network looking for vulnerable systems. There are three parts to setting this up: Active Directory groups, authorization policies and the resources themselves.


Resources are defined by IP address, network ID and port. For example, say we have a database server that we want a non-web-based front-end application to connect to. You create an internal network on the XenServer where you want that resource to go, then place the virtual machine on the XenServer and assign it to that network. The resource is accessed via the VPX, which has a leg in both networks and bridges you from your internal network to the resource. Resources are defined to the AGEE via an authorization policy as an IP address, network and port. So the SQL Server that I have placed in (already configured) with an IP address of will be the resource I grant access to.

Authorization Policies:

This is the hierarchy for setting up access: AD groups are assigned authorization policies, and authorization policies have resources instantiated as rules. Using the resource above, I would create an authorization policy called "Sensitive DB" and assign the network ID or IP address and port to that specific policy. You can assign more than one resource to an authorization policy. Once this is done, you can assign the policy to a group, which brings us to the Active Directory integration with the AGEE.

Active Directory Group Extraction:

On the AGEE you will create a group that matches, exactly, the name of the group in Active Directory. This process is LDAP extraction, so the same should work for eDirectory/NDS, iPlanet/SunOne and OpenLDAP. So let's say for the example above we create an AD group called "SensitiveDB". I create that exact same group on the Netscaler, and so long as the user authenticates via Active Directory, the AGEE will check for matching LDAP groups. By assigning an authorization policy to a specific group, you ensure that access control to the sensitive information is still managed by the incumbent identity management framework, and that only users in specific groups are given access to sensitive data. The AGEE will act as the doorman, ensuring that no one gets access to any areas they are not supposed to.

Can I add access to resources outside of the SCIF?

Yes. If an outside resource on a different network needed to be made available to you while you were working inside the SCIF, you could accomplish this using the AGEE by setting up a VIP. If you were connected via VPN to the SCIF network (say and there was some reference data located on another network, you could create a VIP on the network and present the external data to the inside with the same security gauntlet that you would use to present VIPs to the internal network. Say you had a group of contractors that you wanted to restrict to a SCIFNET but they also needed access to a web-based timekeeping application: you could create an internal VIP and present it to the users inside the SCIF without exposing the entire internal network.

Integrating SCIFNET with VDI:

Initially, I wanted a situation similar to a physical SCIF, where a person walks into a room, accesses a secure terminal, and from there accesses sensitive data on a network. In this manner, I can ensure that the end user is accessing data from what amounts to a glorified dumb terminal. Placing the VDI environment inside the SCIF created some federated-services challenges that I have not mastered yet: namely, you need AD to use XenDesktop, and this meant poking a hole to allow for that AD integration. However, with endpoint analysis and the "barred window" access offered by the AGEE, I felt the risk was mitigated. With split tunneling off and only VPN traffic allowed once the user connects to the AGEE, I felt like we would be pretty safe. You can also still use VDI, just on your incumbent internal network instead of inside the SCIF; otherwise, you need to set up a completely new AD infrastructure inside the SCIF. I am not well versed enough with ADFS or similar solutions to be able to adequately address this in this paper.

Can this be done without using a black network or VM’s:

It is likely more experienced readers have already made the connection and realized that yes, it can be done. For federal government sites, I would recommend putting a Netscaler 9010 with a FIPS module on the network, then setting up an entire switched network that is NOT on the internal network but is bridged by the AGEE software on the Netscaler. You can still deliver "barred window" access to the physical resources, and you do not have the risk of the hypervisor itself becoming compromised. In production, it may be a lot harder to get the VPX-based solution approved by security personnel, but physically segmenting your resources may be easier to get approved; and while I have not seen it in my environment, I am quite sure a similar solution currently exists using either PIX or IOS-based ACLs.

Logging and Accountability:

What I like most about using the AGEE for compartmentalized access is the logging. While a PIX or IOS-based ACL will give you only an offending IP, my VPN logs, once parsed and written to SQL, have the user ID in addition to the port, source and destination IP address. This means that I can type the IP address of a resource into my SQL Reporting Services website and get the date, time, external IP, port and username of every single user who has accessed that resource. Additionally, the AGEE logs policy hits whether they are ALLOWED or DENIED. Once parsing is finished, I can, on an hourly, daily or monthly basis, check for users who trip the "DENIED" policy. Since I already have the username in my logs, I don't have to hunt down who had what IP address. This puts me in a position to be more proactive: if I see a large number of ACCESS DENIED logs, I can go in and immediately kill a user's VPN session post haste. The digital epidemiology portion is a whitepaper in itself, but having a user ID tied to each log makes incident response much faster.


Say you have a key resource at that must have a blanket "Deny" applied to it and is only available via explicit "Allows". For this you can create an authorization policy called "TopSecret" and create a rule for DESTIP== with an action of DENY. You bind this policy to your AD group and set it higher than any other policy. This will ensure that if users attempt to get to that server, they will be denied access. What I like about the AGEE logs is that I get a username and the policy that was violated, as well as the source IP address. Effective parsing of these log files will allow you to use event correlation to find out who has attempted unauthorized access.

 Example Log file from blocked access:

15:16:39     01/03/2010:20:15:40 GMT ns PPE-0 : SSLVPN NONHTTP_RESOURCEACCESS_DENIED 1250215 : Context jsmith@ – SessionId: 15- User jsmith – Client_ip – Nat_ip “Mapped Ip” – Vserver – Source – Destination – Total_bytes_send 298 – Total_bytes_recv 0 – Denied_by_policy “TopSecret” – Group(s) “CITGO VPN Testers”

While many segmented networks will have PIX logs that give you this information, what I like about these logs is that I can parse them into a database, putting each highlighted item into a column for date/time, action, context and policy, so in my database a query would return the following:

Time     Context     Destination     Policy        Action

         jsmith@                     TopSecret     DENIED


In this scenario, I can immediately ask jsmith why he/she is trying to access this system. I have a record of the breach attempt and can even configure KIWI to alert me via email at the exact time the breach occurs.
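The parsing step described above can be sketched in a few lines. This is a minimal illustration, not my production parser; the regular expressions, sample line and field handling are assumptions modeled on the log samples shown in this post:

```python
import re

# Illustrative AGEE deny line, modeled on the sample above (IPs elided,
# separators simplified to plain hyphens)
line = ('15:16:39 01/03/2010:20:15:40 GMT ns PPE-0 : SSLVPN '
        'NONHTTP_RESOURCEACCESS_DENIED 1250215 : Context jsmith@ - '
        'SessionId: 15- User jsmith - Denied_by_policy "TopSecret" - '
        'Group(s) "CITGO VPN Testers"')

# Pull out the columns we would write to SQL: timestamp, event, user
m = re.search(
    r'(?P<stamp>\d{2}/\d{2}/\d{4}:\d{2}:\d{2}:\d{2}) GMT'
    r'.*?(?P<event>NONHTTP_RESOURCEACCESS_DENIED|TCPCONNSTAT)'
    r'.*?User (?P<user>\S+)', line)
# The denying policy only appears on DENIED lines
policy = re.search(r'Denied_by_policy "([^"]+)"', line)

row = {
    'time':   m.group('stamp'),
    'user':   m.group('user'),
    'policy': policy.group(1) if policy else None,
    'action': 'DENIED' if m.group('event').endswith('DENIED') else 'ALLOWED',
}
print(row)
```

From here, one INSERT per parsed line populates the date/time, context, policy and action columns described above.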

Likewise, with the AGEE I have a record of the successful attempts as well.

17:13:10    01/03/2010:22:12:10 GMT ns PPE-0 : SSLVPN TCPCONNSTAT 1299232 : Context jsmith@ – SessionId: 16- User jsmith – Client_ip – Nat_ip – Vserver – Source – Destination – Start_time “01/03/2010:22:12:10 GMT” – End_time “01/03/2010:22:12:10 GMT” – Duration 00:00:00 – Total_bytes_send 48 – Total_bytes_recv 19 – Total_compressedbytes_send 63 – Total_compressedbytes_recv 39 – Compression_ratio_send 0.00% – Compression_ratio_recv 0.00% – Access Allowed – Group(s) “CITGO VPN Testers”

Note that you do not get a policy name with an allowed log; however, all Denies should include the policy that denied them.


I plan to include some videos on how to accomplish this; it is relatively simple. This is also not a new concept, and networks have used IOS-based ACLs to accomplish it, but I believe the AGEE, be it a virtual appliance or physical hardware, provides a much easier solution than an enterprise NAC endeavor. In fact, I have heard some horror stories regarding NAC deployments. In the interim, while NAC continues to mature and organizations ease into their NAC solutions, SCIFNET lets you deliver the same security levels without the taunting specter of an enterprise NAC deployment. Compartmentalize sensitive data and place an AGEE in front of it, and you have all of the same benefits of Network Access Control at a fraction of the price and overhead.

 To see a video of SCIFNET put to use with a VPX and an internal XenServer Network click here:

Thanks for reading


EdgeSight Under the Hood: Part 2 (Will be moved to

Okay, in this blog posting I want to continue covering a few more views in EdgeSight that I like to run ad hoc queries against. Today's view is called vw_es_archive_application_network_performance. This view provides information on network delay, server delay, XenApp server, process name and the downstream hosts that your XenApp servers communicate with. I have used this view to check delays of executables such as winlogon.exe, for example to measure delay between that process and our domain controllers. I will cover checking delays by process name, XenApp server and downstream host.

The first part will demonstrate how to find the network and server delay of specific downstream hosts, as well as how to measure average XenApp server delay. Then, in the second part, I want to answer one of the questions from the first posting.

Downstream Delay:
I actually got to present on EdgeSight during Synergy 2008, and one of the key points that I tried to drive home is how EdgeSight helps you with the never-ending B.S. witch hunts that always seem to occur when someone's application is "running slow on Citrix". I would say that less than 30% of what I actually investigate ends up being an actual XenApp issue. I will go over a few ad hoc queries that will give you the average delay of your downstream hosts, and the average delay experienced by each XenApp server, letting you see if you have a specific XenApp box that may be having issues.

The first ad hoc query deals with downstream hosts; it returns the downstream host and the network/server delay. I have set this query to filter out any downstream host that does not have at least 100 records and a server delay of at least 300 milliseconds. You can edit or remove the "having" clause to suit your environment.

select distinct hostname, sum(network_delay_sum)/sum(record_count) as "Network Delay", sum(server_delay_sum)/sum(record_count) as "Server Delay"
from vw_es_archive_application_network_performance
group by hostname
having sum(record_count) > 100
and sum(server_delay_sum)/sum(record_count) > 300
order by sum(server_delay_sum)/sum(record_count) desc


In English: “Give me the Network and Server delay of every downstream host that has at least 100 records (packets?) and a server latency of at least 300ms” 
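To make the weighted-average math concrete, here is a small mock-up using Python's built-in sqlite3 in place of the EdgeSight database. The table name and numbers are made up for illustration; the real view has many more columns:

```python
import sqlite3

# In-memory stand-in for vw_es_archive_application_network_performance
con = sqlite3.connect(':memory:')
con.execute('create table perf (hostname text, network_delay_sum int, '
            'server_delay_sum int, record_count int)')
con.executemany('insert into perf values (?,?,?,?)', [
    ('dc01',  1000, 40000,  80),   # two samples for dc01: 140 records total,
    ('dc01',  2000, 50000,  60),   #   avg server delay 90000/140 = 642ms
    ('sql01',  500,  9000, 200),   # avg server delay 45ms, filtered out
])

# Same shape as the ad hoc query: per-host weighted averages with
# HAVING floors on record count and server delay
result = con.execute('''
    select hostname,
           sum(network_delay_sum)/sum(record_count) as net_delay,
           sum(server_delay_sum)/sum(record_count) as srv_delay
    from perf
    group by hostname
    having sum(record_count) > 100
       and sum(server_delay_sum)/sum(record_count) > 300
    order by srv_delay desc''').fetchall()
print(result)  # -> [('dc01', 21, 642)]
```

Note the average is weighted by record_count rather than averaging per-row averages, which is why the query sums both columns before dividing.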

XenApp Server Delay:
It is a good idea to monitor your XenApp server delay; this will tell you if a particular XenApp server is having a layer 1 or layer 2 issue. This quick query will show you the average delay of your XenApp servers.

select distinct machine_name, sum(network_delay_sum)/sum(record_count) as "Network Delay", sum(server_delay_sum)/sum(record_count) as "Server Delay"
from vw_es_archive_application_network_performance
group by machine_name
order by sum(server_delay_sum)/sum(record_count) desc


Note: You will also see "EdgeSight for Endpoints" client data in this view as well.


Executable Delay:
This query shows the delay associated with individual executables. You may check outlook.exe to see if you have delay to a downstream Exchange server or, in my case, check winlogon.exe for delays to domain controllers.

select distinct exe_name, sum(network_delay_sum)/sum(record_count) as "Network Delay", sum(server_delay_sum)/sum(record_count) as "Server Delay"
from vw_es_archive_application_network_performance
group by exe_name
order by sum(server_delay_sum)/sum(record_count) desc

Session Statistics:
Last week I got a question about session counts and I wanted to answer it in this post; here was the question:

 “I’m looking for a custom report showing the application usage (Published Apps, not processes) on a hourly, daily and monthly base and a custom report showing the concurrent sessions on a hourly, daily and monthly base.”  

The view I used for this was vw_ctrx_archive_client_start_perf:

declare @begin varchar(2)
declare @end varchar(2)
declare @today datetime
declare @app varchar(50)
set @today = convert(varchar,getdate(),111)
set @begin = '00'
set @end = '23'
set @app = 'Outlook'
select convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00' as "Time", count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) = @today-1
and published_application like '%'+@app+'%'
group by convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00'
order by convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00'

In English: give me hourly session counts for a specific application. Substitute whichever app you want to see for Outlook in the @app variable. Note that this is an hourly report, so the time format is set to 108.
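If you want to sanity-check the hourly bucketing logic outside of EdgeSight, the same grouping can be mocked up with sqlite3. The data here is invented, and the real view has far more columns; strftime plays the role that convert(varchar(2), ..., 108) plays in the T-SQL:

```python
import sqlite3

# Toy stand-in for vw_ctrx_archive_client_start_perf
con = sqlite3.connect(':memory:')
con.execute('create table starts (time_stamp text, sessid int, '
            'published_application text)')
con.executemany('insert into starts values (?,?,?)', [
    ('2010-01-03 08:15:00', 1, 'Outlook'),
    ('2010-01-03 08:40:00', 1, 'Outlook'),   # same session, counted once
    ('2010-01-03 08:55:00', 2, 'Outlook'),
    ('2010-01-03 09:05:00', 3, 'Outlook'),
    ('2010-01-03 09:10:00', 4, 'Word'),      # different app, excluded
])

# Hourly distinct-session counts for one application
result = con.execute('''
    select strftime('%H', time_stamp) || ':00' as hour,
           count(distinct sessid)
    from starts
    where published_application like '%Outlook%'
    group by hour
    order by hour''').fetchall()
print(result)  # -> [('08:00', 2), ('09:00', 1)]
```

The count(distinct sessid) is what keeps a session that generates several start records from being counted more than once per hour.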

Daily Application Usage:
Using the same view, I change the query above just a little to accommodate a query by day.

declare @today datetime
declare @app varchar(50)
set @today = convert(varchar,getdate(),111)
set @app = 'Outlook'
select convert(varchar(10),dateadd(hh,-4,time_stamp), 111) as "Date", count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
and published_application like '%'+@app+'%'
group by convert(varchar(10),dateadd(hh,-4,time_stamp), 111)
order by convert(varchar(10),dateadd(hh,-4,time_stamp), 111)

 Monthly Application Usage:
Depending on how long you have your retention set (the minimum is 30 days), this query may or may not work for you, but it returns the number of unique sessions per application per month.

declare @today datetime
declare @app varchar(50)
set @today = convert(varchar,getdate(),111)
set @app = 'Outlook'
select convert(varchar(7),dateadd(hh,-4,time_stamp), 111) as "Date", count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
and published_application like '%'+@app+'%'
group by convert(varchar(7),dateadd(hh,-4,time_stamp), 111)
order by convert(varchar(7),dateadd(hh,-4,time_stamp), 111)

Application Matrix:
SQL Server Reporting Services will let you create a matrix; these two queries, one daily and one monthly, will let you sort as follows:

                   Date1   Date2   Date3   Date4   Date5
Outlook            Count1  Count2  Count3  Count4  Count5
Word               Count1  Count2  Count3  Count4  Count5
Oracle Financials  Count1  Count2  Count3  Count4  Count5
Statistical APP    Count1  Count2  Count3  Count4  Count5
Custom APP-A       Count1  Count2  Count3  Count4  Count5


This has been the reporting method that has made my management the happiest, so I use the matrix tool with SSRS as often as possible. Remember, if you have EdgeSight, you have SSRS, and setting up reports is no harder than an Access database.
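For anyone without SSRS handy, the pivot the matrix control performs is easy to mock up. This hypothetical sketch (made-up numbers) turns the (date, application, count) rows the daily query returns into one row per application and one column per date:

```python
from collections import defaultdict

# Rows in the shape the daily matrix query returns (illustrative data)
rows = [('2010/01/01', 'Outlook', 120), ('2010/01/01', 'Word', 80),
        ('2010/01/02', 'Outlook', 115), ('2010/01/02', 'Word', 90)]

# Pivot: applications down the side, dates across the top
dates = sorted({d for d, _, _ in rows})
matrix = defaultdict(dict)
for d, app, n in rows:
    matrix[app][d] = n

# Print the matrix, filling in 0 where an app had no sessions that day
print('App'.ljust(10) + ''.join(d.rjust(12) for d in dates))
for app in sorted(matrix):
    print(app.ljust(10) + ''.join(
        str(matrix[app].get(d, 0)).rjust(12) for d in dates))
```

SSRS does exactly this grouping for you; the query only has to return the flat (date, application, count) rows.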

Here are the queries


First The Daily Matrix:

declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(10),dateadd(hh,-4,time_stamp), 111) as "Date", published_application, count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
group by convert(varchar(10),dateadd(hh,-4,time_stamp), 111), published_application
order by convert(varchar(10),dateadd(hh,-4,time_stamp), 111), count(distinct sessid) desc

Then the Monthly Matrix:
declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(7),dateadd(hh,-4,time_stamp), 111) as "Date", published_application, count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
group by convert(varchar(7),dateadd(hh,-4,time_stamp), 111), published_application
order by convert(varchar(7),dateadd(hh,-4,time_stamp), 111), count(distinct sessid) desc

 Concurrent Session Statistics:
A colleague of mine, Alain Assaf, set up a system that gives you this info every five minutes and is almost real time; see his site for details. Keep in mind that EdgeSight data is not real time, so if you set up a private dashboard from it, you may have to wait for it to refresh.

The vw_ctrx_archive_client_start_perf view appears to give us only start times of specific published applications. Perhaps the most used view in any of my reports is vw_ctrx_archive_ica_roundtrip_perf. For this set of queries, I will count concurrent sessions; I will also go into ICA delays for clients in my last post on EdgeSight Under the Hood.

I will try to answer the user's question on concurrent sessions with three pretty basic queries for hourly, daily and monthly usage:

Hourly Users:
declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00' as "Time", count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) = @today-3
group by convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00'
order by convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00'


Daily Users:
declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(10),dateadd(hh,-4,time_stamp), 111) as "Date", count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
group by convert(varchar(10),dateadd(hh,-4,time_stamp), 111)
order by convert(varchar(10),dateadd(hh,-4,time_stamp), 111)

 Monthly Users:

declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(7),dateadd(hh,-4,time_stamp), 111) as "Date", count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
group by convert(varchar(7),dateadd(hh,-4,time_stamp), 111)
order by convert(varchar(7),dateadd(hh,-4,time_stamp), 111)

For the most part, I have vetted all of these queries; you may get varying results, and if so, check for payload errors, licensing, etc. I would really like to see some better documentation on the data model; most of these were basically done by running the query and checking it against the EdgeSight canned reports to see if my SWAG about how they did their calculations was correct. All of the queries I ran here I checked, and they looked to be accurate. If you are going to bet the farm on any of these queries to the brass in your organization, vet my numbers first.

My next post will deal with ICA latency and delay issues for individual users and servers.

Thanks for reading!



Digital Epidemiology: EdgeSight Under the Hood (Will be moved to

Okay, so no flat files, parsing or Kiwi syslogging today. Today I want to talk about EdgeSight 5.x. If any of you have attempted to reverse engineer EdgeSight, you have probably noticed that the tables are a lost cause; all of the key data that you will want to harvest is located in the views. I want to do a few blog posts on each of my favorite views and how you can pull statistics from them instantly via query analyzer. I will start by saying Citrix has created an outstanding front end delivered via the web interface. I am in no way knocking that interface; there are just times when the canned reports don't do it for you. Until the engineers at Citrix get their hands on a crystal ball, there will always be a use for good old-fashioned ad hoc queries. I am going to go over a few key queries against the vw_ctrx_archive_ica_roundtrip_perf view from your EdgeSight database and show how you can open query analyzer and gather these statistics post haste or, if you are adept with Reporting Services, set up reports for yourself. I have pitched to the Synergy 2010 group that they let me host a breakout covering how to integrate some of what I do with SQL Server Reporting Services; I think I can cover a lot in a 90-minute session and let engineers take something away that they can use in their own environments.

So, as I stated, the view of the day is vw_ctrx_archive_ica_roundtrip_perf. Open your SQL Server Management Studio and log into the SQL Server hosting your database with an account that has "datareader" privileges. If your admin account does not work, your EdgeSight service account will likely suffice, if your organization allows service accounts to be used in that manner.

The @today variable is the current day. If you want to check between yesterday and the day before, you would change "convert(varchar(10),dateadd(hh,-4,time_stamp),111) > @today-2" to "convert(varchar(10),dateadd(hh,-4,time_stamp),111) between @today-2 and @today-1".
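For reference, that date arithmetic in plain terms (a hypothetical Python rendering of the same window logic, not part of the query itself):

```python
from datetime import date, timedelta

# Stand-in for getdate(); any fixed date works for illustration
today = date(2010, 1, 3)

two_days_ago = today - timedelta(days=2)   # @today-2
yesterday    = today - timedelta(days=1)   # @today-1

# "> @today-2" keeps yesterday AND today;
# "between @today-2 and @today-1" covers the day before yesterday
# through yesterday, matching the change described above
print(two_days_ago, yesterday)  # -> 2010-01-01 2010-01-02
```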

Find the number of ICA sessions by server by time of day
About this query:
In this query we declare three variables, two of which you must edit. The @begin and @end variables must hold the hours of day that you want to search. So, if you wanted to know the number of unique users for each server between 8AM and 2PM, you would enter '08' for @begin and '14' for @end.

declare @begin varchar(2)
declare @end varchar(2)
declare @today datetime
set @today = convert(varchar,getdate(),111)
set @begin = '14'
set @end = '23'
select machine_name, count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(2),dateadd(hh,-4,time_stamp),108) between @begin and @end
and convert(varchar(10),dateadd(hh,-4,time_stamp),111) > @today-2
group by machine_name
order by count(distinct [user]) desc

Find ICA latency by user by day
About this query:
This query will show you the ICA latency for each user, sorted by the user with the worst latency. If you wanted to check sessions on a specific server, you would add the following above the "group by" statement: and machine_name like '%netbiosNameOfXenAppServer%'

declare @today datetime
set @today = convert(varchar,getdate(),111)
select [user], sum(network_latency_sum)/sum(network_latency_cnt) as "Latency"
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp),111) > @today-1
group by [user]
order by sum(network_latency_sum)/sum(network_latency_cnt) desc


ICA Latency by Server: 
About this query:
This query will show you the latency by server for a given day. This can be handy if you want to keep tabs on server health. If you note high latency for a particular server on a specific day, you may need to look and see whether there was a user connection that skewed the results or whether all sessions on that server had issues.

declare @today datetime
set @today = convert(varchar,getdate(),111)
select machine_name, sum(network_latency_sum)/sum(network_latency_cnt) as "Latency"
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp),111) > @today-1
group by machine_name
order by sum(network_latency_sum)/sum(network_latency_cnt) desc

Find total sessions by server by farm:
About this Query:
If you have more than one farm, you can specify the farm name in this query to get the number of connections per server by farm. Users in very large environments with multiple farms may find it handy to query by farm name.

declare @today datetime
set @today = convert(varchar,getdate(),111)
select machine_name, count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp),111) = @today
and xen_farm_name = '%FarmName%'
group by machine_name
order by count(distinct [user]) desc
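If you just want a quick census of connections across all farms at once, the same view can be grouped by farm name instead of server name. This is a sketch against the same view; adjust the date handling to match your environment:

```sql
declare @today datetime
set @today = convert(varchar,getdate(),111)
-- Distinct users per farm for today
select xen_farm_name, count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp),111) = @today
group by xen_farm_name
order by count(distinct [user]) desc
```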

There are at least four views that I like to work with directly.  I also integrate all of my queries, including the variables, into SQL Server Reporting Services, letting me customize my reports for my specific needs.  The eventual goal is to provide our operations and support team with a proactive list of users with high latency so that we can call them and let them know that we noticed they were having issues.  My next post will cover how to look at problematic downstream hosts that cause you to get a bunch of calls saying it's Citrix's fault!!  I apologize for the lack of examples, I am limited in what I can show in my environment.  As I stated, I am hoping to show all of this integration, including custom SQL Reports, at Synergy 2010.

If you have a specific query that you want, post it as a comment and I will reply with a SQL query that gets you as close as I can.

Thanks for reading!        


Xen and the art of Digital Epidemiology

In 2003 I started steering my career toward Citrix/VMware/virtualization, and at the time, aside from being laughed at for running this fledgling product called ESX Server 1.51, most of my environment was Windows based. There were plenty of shrink-wrapped tools to let me consolidate my events, and the only Unix I had to worry about was the Linux kernel on the ESX Server. Now my environment includes a series of new regulatory frameworks (Sarbanes-Oxley, CISP, and currently FIPS 140-2). What used to be a Secure Gateway with a single Web Interface server and my back end XenApp farm now includes a Gartner-leading VPN appliance (Access Gateway Enterprise Edition), load balanced (GSLB) Web Interface servers, an application firewall, and XenApp servers hosted on Linux based XenServer and VMware. So now, when I hear "A user called and said their XenApp session was laggy," where the hell do I begin? How do I get a holistic vision of all of the security, performance and stability issues that could come up in this new environment?

As a security engineer in 2004, I started calling event correlation "digital epidemiology." Epidemiology is defined as "the branch of medicine dealing with the incidence and prevalence of disease in large populations and with detection of the source and cause of epidemics of infectious disease."

I think that this same principle can be applied to system errors, computer based viruses and overall trends. At the root of this is the ability to collate logs from heterogeneous sources into one centralized database. During this series, I hope to go over how to do this without going to your boss and asking for half a million dollars for an event correlation package.

I currently perform the following with a $245 copy of KIWI Syslog Server (integrated with SQL Server Reporting Services):

  • Log all Application Firewall alerts to a SQL Server and present them via an operations dashboard. This includes the violation type (SQL injection, XSS, etc.), offending IP and time of day.
  • Pull STA logs and provide a dashboard matrix with the number of users, total number of helpdesk calls, percentage of calls (over 2.5% means we have a problem) and the last ten calls. (Our operations staff can see that "PROTOCOL DRIVER ERROR" and react before we start getting calls.)
  • I am alerted when key VIP Personnel are having trouble with their SecurID or AD Credentials.
  • I can track the prevalence of any error, I can tell when it started and how often it occurs.
  • My service desk has a tracker application that they can consult when a user cannot connect, telling them if the user's account is locked out, their key fob is expired, or they just fat-fingered their password. This has turned a 20-minute call into a 3-minute call.
  • I have a dashboard that tells me the “QFARM /Load” data for every server refreshing every 5 minutes and it turns Yellow at 7500 and red at 8500 letting us know when a server may be about to waffle.

For this part of the Digital Epidemiology series I will go over parsing and logging STA logs, why it was important to me, and what you can do with them after getting them into a SQL Server.


A few years ago, I was asked "What is the current number of external vs internal users?"  This involved a very long, complicated query against RMSummaryDatabase that worked okay but was time consuming. One thing we did realize was that every user who accessed our platform externally came through our CAG/AGEE, which meant that they were issued a ticket by the STA servers. So we configured logging on the STA servers and realized a few more things: we also got the application that they launched as well as the IP address of the server they logged into. So now, if a user says they had a bad Citrix experience, we know where they logged in and what applications they used. While EdgeSight does most of our user experience troubleshooting for us, it does not upload in real time; our STA solution does. We know right then and there.

By integrating this with SQL Server Reporting Services, we have a poor man’s Thomas Koetzing solution where we can search the utilization of certain applications, users and servers.

For this post we will learn how to set up STA Logging, how to use EPILOG from Intersect Alliance to write the data to a KIWI Syslog Server and then we will learn how to parse and write that to a SQL Server and use some of the queries I have included to gain valuable data that can eventually be used in a SQL Server Reporting Services report.

Setting up STA Logging:

Go to %ProgramFiles%\Citrix\system32 and add the following to the ctxsta.config file:

MaxLogSize=55 (Make sure this size is sufficient).

LogDir=W:\Program Files\Citrix\logs\
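Taken together, the relevant portion of ctxsta.config ends up looking something like the fragment below. (The section header and the LogLevel/MaxLogCount entries are from memory of a stock ctxsta.config and may differ in your version; the two values discussed above are MaxLogSize and LogDir.)

```ini
[GlobalConfig]
; STA logging settings
LogLevel=3
MaxLogCount=10
MaxLogSize=55
LogDir=W:\Program Files\Citrix\logs\
```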

In the LogDir folder you will note that the log files created will be named sta2009MMDD.log

What exactly is in the logs:
The logs will show up in the following format. (We are interested in the UserName, ApplicationName and ServerAddress elements; the parse script below will pipe them into a database for us.)

INFORMATION 2009/11/22:22:29:32 CSG1305 Request Ticket - Successful. ED0C6898ECA0064389FDD6ABE49A03B9 V4 CGPAddress = Refreshable = false XData = <?xml version="1.0"?><!--DOCTYPE CtxConnInfoProtocol SYSTEM "CtxConnInfo.dtd"--><CtxConnInfo version="1.0"><ServerAddress></ServerAddress><UserName>JSMITH</UserName><UserDomain>cdc</UserDomain><ApplicationName>Outlook 2007</ApplicationName><Protocol>ICA</Protocol></CtxConnInfo> ICAAddress =

Okay, so I have logs in a flat file….big deal!

The next step involves integrating them with a free open source product called "Epilog" by this totally kick ass company called InterSect Alliance. We will configure Epilog to send these flat files to a KIWI syslog server.

So we will go to the InterSect Alliance download site to get Epilog and run through the installation process. Once that is completed, you will want to configure your Epilog agent to "tail-and-send" your STA log files. We will do this by telling it where to get the log file and who to send it to.

After the installation go to START->Programs->Intersect Alliance-> Snare/Epilog for Windows

Under "Log Configuration", for STA logs we will use the log type of "Generic", type in the location of the log files, and tell Epilog to use the file format of STA20%-*.log

After configuring the location and type of logs, you will want to go to "Network Configuration" and type in the IP address of your syslog server and select port 514 (syslog uses UDP 514).

Once done, go to “Latest Events” and see if you see your syslog data there.


I assume that most Citrix engineers have access to a SQL Server, and since Epilog is free, the only thing in this solution that costs money is KIWI Syslog Server. A whopping $245, in fact. Over the years a number of event correlation solutions have come along; I was at one company where we spent over $600K on a solution that had a nice dashboard and logged files to a flat file database (WTF? Are you kidding me?!). The KIWI Syslog Server will allow you to set up ten custom database connectors, and that should be plenty for any Citrix administrator who is integrating XenServer, XenApp/Windows servers, Netscaler/AGEE, CAG 2000 and Application Firewall logs into one centralized database. While you need some intermediate SQL skills, you do not need to be a superstar, and the benefits of digital epidemiology are enormous. My hope is to continue blog posts on how I use this solution, and hopefully you will see benefits beyond looking at your STA logs.

The first thing we need to do is add a rule called "STA-Logs" and filter for strings that will let KIWI know that the syslog update is an STA log. We do so by adding two filters. The first one matches the string "GenericLog".

The second filter is "<UserName>". Together, these two filters will match STA syslog messages.

Now that we have created our filters, it's time to perform actions. There are two actions we want to perform: parse the message (pull out the fields noted in the log text above) and write that data to a table in a database. You add actions by right-clicking "Action" and selecting "Add Action".

So our first “Action” is to set up a “Run Script” action. I have named mine “Parse Script”.

Here is the script I use to parse the data. (Thank you, Mark Schill, for showing me how to do this.)

The script: (This will scrub the raw data into the parts you want. Click "Edit Script" and paste.)

Function Main()

  Main = "OK"

  Dim MyMsg
  Dim Status
  Dim UserName
  Dim Application
  Dim ServerIP

  With Fields

    Status = ""
    UserName = ""
    Application = ""
    ServerIP = ""

    MyMsg = .VarCleanMessageText

    ' Only STA ticket messages contain the CtxConnInfo XML payload
    If ( Instr( MyMsg, "CtxConnInfo.dtd" ) ) Then

      Status = "Successful"

      UserBeg = Instr( MyMsg, "<UserName>") + 10
      UserEnd = Instr( UserBeg, MyMsg, "<")
      UserName = Mid( MyMsg, UserBeg, UserEnd - UserBeg)

      AppBeg = Instr( MyMsg, "<ApplicationName>") + 17
      AppEnd = Instr( AppBeg, MyMsg, "<")
      Application = Mid( MyMsg, AppBeg, AppEnd - AppBeg)

      SrvBeg = Instr( MyMsg, "<ServerAddress>") + 15
      SrvEnd = Instr( SrvBeg, MyMsg, "</")
      ServerIP = Mid( MyMsg, SrvBeg, SrvEnd - SrvBeg)

    End If

    ' Map the parsed values to KIWI custom fields for the database write action
    .VarCustom01 = Status
    .VarCustom02 = UserName
    .VarCustom03 = Application
    .VarCustom04 = ServerIP

  End With

End Function


Now that we can parse the data we need to create a table in a database with the appropriate columns.

The next step is to create the field format and create the table. Make sure the account in the connect string has DBO privileges to the database. Set up the custom field format with the following fields. Ensure that the type is SQL Database.
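KIWI's "Create Table" button will build the table for you, but for reference, a hand-rolled equivalent might look something like this. The table and column names match the queries later in this post; the exact column types and the msghostname column are assumptions, so check what KIWI actually generates in your environment:

```sql
-- Sketch of the table KIWI's "Create Table" button generates (types assumed)
create table sta_logs (
    msgdatetime  datetime,      -- when the syslog message arrived
    msghostname  varchar(255),  -- STA server that sent the log (assumed column)
    status       varchar(50),   -- VarCustom01 from the parse script
    username     varchar(255),  -- VarCustom02
    application  varchar(255),  -- VarCustom03
    serverip     varchar(50)    -- VarCustom04
)
```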

As you see below, you will need to set up an ODBC connection for your syslog database, and you will need to provide a connect string here (yes…in clear text, so make sure you know who can log onto the syslog server). When you are all set, click "Create Table" and then "Apply".

Hopefully once this is done, you will start filling up your table with STA Log entries with the data from the parse script.
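Once entries start flowing, a quick sanity check (using the column names assumed above) is to pull the most recent rows:

```sql
-- Most recent 10 STA tickets written by the parse script
select top 10 msgdatetime, username, [application], serverip
from sta_logs
order by msgdatetime desc
```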

I have included some helpful queries that have been very useful to me. You may also want to integrate this data with SQL Server Reporting Services; with that, you can build a poor man's Thomas Koetzing tool.

Helpful SQL Queries: (Edit @BEG and @END values)


How many users for each day: (unique users per day)

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-30'
select convert(varchar(10),msgdatetime, 111), count(distinct username)
from sta_logs
where msgdatetime between @beg and @end
group by convert(varchar(10),msgdatetime, 111)
order by convert(varchar(10),msgdatetime, 111)

Top 100 Applications for this month:

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-30'
select top 100 [application], count(application)
from sta_logs
where msgdatetime between @beg and @end
group by application
order by count(application) desc

Usage by the hour: (Unique users for each hour)

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-02'
select convert(varchar(2),msgdatetime,108)+':00', count(distinct username)
from sta_logs
where msgdatetime between @beg and @end
group by convert(varchar(2),msgdatetime,108)+':00'
order by convert(varchar(2),msgdatetime,108)+':00'
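And for the poor man's Thomas Koetzing search mentioned earlier, looking up everything a single user launched is just a filter on username (JSMITH is a placeholder):

```sql
-- Everything a given user launched, most recent first
declare @USER varchar(255)
set @USER = 'JSMITH'   -- placeholder user name
select msgdatetime, [application], serverip
from sta_logs
where username = @USER
order by msgdatetime desc
```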

Will that be Paper or Panic?

According to the New York Times, 8 out of 10 doctors still use paper record keeping.  As I stated in an earlier blog, the stimulus package will spend a "ga-jillion" dollars on converting paper records to electronic medical records. A 2007 article noted that "a key tenet of HIPAA's data privacy and security requirements is a need for data access accountability, i.e. the ability to understand 'who is doing what to which data and by what means?'"
In my previous post I talked about how one could secure personally identifiable information by placing the data behind the Netscaler Application Firewall to block or "X" out Social Security numbers and phone numbers.  In this post I will discuss a new feature in the Netscaler 9 product called AAA Traffic Management. This feature allows you to impose Authentication, Authorization and Auditing on downstream data that may live on servers outside your AD domain infrastructure. Regardless of what platform the content lives on and which identity management system it uses, you can force users to authenticate and have their access logged, meeting several regulatory rules and ensuring the ability to see "who's doing what to which data".

Deployment Scenarios:

Scenario 1:
The incumbent identity management solution for Company A, a publicly held company on the NYSE, is Active Directory.  They recently acquired another company that was not public, is not subject to the regulatory framework that Company A is, and lacks any security measures on key data that now must be secured.  To make matters worse, much of their data resides on an OS390 with a 3rd-party web server.


  • You can quickly make this data available by creating a service on the Netscaler that maps to the OS390 web server. 
  • When you create the VIP to present the data, enable authentication and bind an AAA Traffic Management VIP.
  • Create an LDAP Authentication policy that leverages your existing AD Domain Controllers. 

Now when users connect to the VIP on the Netscaler they are redirected to the Authentication VIP and forced to log in with their domain credentials.  This will help limit the number of logins that they have as well as the amount of RACF administration that needs to be done.  Also, the Netscaler will syslog all access to this data.

Scenario 2:

You are a local doctor who is moving to electronic data by scanning files into a database and making them available via a PDF archive.  You are bound by HIPAA to account for every single person who looks at that data.  You place the PDFs on a web server, index them and allow end users to access them, but you cannot report on who accessed which PDF archives.


  • Again, we deliver the web server via a VIP on the Netscaler and enable authentication
  • Ensure that everyone who accesses the data has to provide one or two-factor authentication

Now every binary file that is accessed, including the PDFs, is logged to the syslog database or event correlation engine.

Scenario 3:

You have a web server in the DMZ with a few corporate presentations that you want your staff to be able to access but that you do not want available to the general public.  Since the system is in the DMZ you cannot provide AD authentication, but you want to account for everyone who accesses the presentations, and you do not want to use an impersonation account or replicate your existing AD database with ADAM or DirXML.


  • Yet again, place the presentations behind a Netscaler and create a VIP to present the web server housing the presentations.
  • Create an authentication policy using Secure LDAP over TCP 636. 
  • Set up an ACL allowing the NSIP to traverse the firewall to a domain controller (or in my case, a VIP consuming several domain controllers)
  • Bind the authentication policy to an Authentication VIP. 
  • Configure the VIP for the presentations to use the FQDN of the Authentication VIP.

Scenario 4:

You are a CRM vendor like envala or Sales Logix and you have a customer who wants to access their customer database hosted as SaaS (cloud computing).  They would like users to log in against their LDAP server to access the CRM data so that identity management can be handled on their end.  That way, if a salesman leaves, they can disable his account without the fear of him logging into the CRM database and stealing leads, and without the delay of removing that account while they create a support ticket.  Also, since they are consuming this as a SaaS solution, they want you to provide logs of who accessed the system.


  • Have them make their AD domain controller available securely via LDAPS on TCP 636, or they could use a Netscaler to provide a VIP that brokers to the same domain controller.  They will also need to set up an ACL allowing your NSIP to traverse their firewall for authentication.
  • Create an authentication policy using Secure LDAP over TCP 636 and point the Server to the customer’s LDAP server. 
  • Set up an Authentication VIP assigning the policy you created for the customer to ensure that it consumes the appropriate LDAP server. 
  • Create a VIP on the Netscaler that front-ends their CRM website.
  • Configure the CRM VIP to use the FQDN of the customer's Authentication VIP.

Figure A: (Shows external users being redirected to an external authentication source via policy)



As I stated previously, my experience with HIPAA is limited, and much of the accountability has been accommodated by back end database programming, down to the actual record.  However, as the security screws become tighter and tighter, the continued access of data with "IUSR_" or "apache" accounts is on a collision course with the mandate(s) for accountability and the demand to be able to report on who accessed what.  I believe that the AAA Traffic Management feature provides a great tool enabling you to impose your identity management solution on any web based content regardless of platform.  Additionally, you get the ability to perform endpoint analysis on incoming clients, which can be interrogated for specific registry entries, services and files to ensure that only certain computer systems can access certain files.  Having been part of a paper-to-electronic transition that did not go so well several years ago, I can attest that having tools that can bridge the regulatory gap between legacy systems and today's heavily guarded environments will make life a lot easier.

See this technology in action at

Electronic Stimulus

According to the Baltimore Sun, President Obama has promised to spend $50 billion over the next five years to coax hospitals, medical centers and the like to begin the process of offering electronic data.  So nurses, occupational therapists and other allied health personnel, as well as doctors, may be carrying something like a Kindle around instead of a clipboard.  With this comes an extension of their existing regulatory framework such as HIPAA, CISP (as no one gets away from a visit to the doctor without putting the plastic down these days) and future restrictions that will be put in place as a result of pressure from Libertarians and ACLU members.

Ensuring that none of my personally identifiable information is left on someone's screen while they walk away from their PC is a very big concern.  As these systems are brought online, the data needs to be protected not just from hackers but also from basic behavioral mistakes that could result in someone leaning over a counter and getting my date of birth, Social Security number and credit card number.

While my security experience with HIPAA is very limited, I can say that keeping this information hidden from the wrong eyes is a basic function of any security endeavor.  How vendors, system integrators and IT personnel can best bridge this gap could have a direct correlation with how successful they are in this space.  How much of that $50 billion over five years will go to IBM? EDS/HP? Perot Systems?  What have you done to show these systems integrators, as well as smaller partners, how your product will help them meet this challenge, and how will you deal with a security screw that seems to only get tightened?  The fact is, there are millions and millions of medical documents, and finding out which parts of which documents contain sensitive data is virtually impossible.  One solution is to pattern-match the data and block it so that it is not visible to the wrong people.  You could do this with a DBA who ran ad hoc queries to match the data and replace it with an "X", but then someone in billing may need that data (keep two copies?), not to mention the staggering cost (Y2K Part 2?).  The best way I can think of is to place the data behind a device that can capture the patterns in the header and "X" the data out in real time.  Enter the Netscaler Platinum, which will not only add compression, authentication, caching and business continuity, but will keep the wrong people from seeing the wrong data.  I am not sure when the money will start flowing, but as I understand it, some hospitals have as much as $1.5 million dangled in front of them to meet this challenge.

In this lab, I present how I used the Netscaler Platinum Application Firewall feature to secure personally identifiable data with a rule called "Safe Object", as well as how to deal with a zero day worm/virus using the "Deny URL" rule.  The "Safe Object" feature, when coupled with the Netscaler policy engine, gives you the flexibility to ensure that certain job types (nurses, doctors, etc.), based on either login (setting authentication on the VIP) or subnet, do not see things like Social Security numbers, credit cards and other sensitive data, while at the same time ensuring that information is available to billing and accounts receivable personnel.


For this lab, I used a basic Dell 1950 G6 with a virtualized Netscaler VPX that functioned as a VPN, allowing me to establish a secure tunnel to the sensitive data on a non-wired network that resided on that server.  An Apache server on the non-wired network with bogus phone numbers and Social Security numbers was used as the back end web server.  In a real world scenario, you could either hypervise your web server and place it on a non-wired network as covered in my "VPX Beyond the Lab" blog, or you could ACL off your web server so that only the MIP/SNIP of the Netscaler was allowed to access your web content.

See the lab here: