Jan 26 2007
 

After receiving a few requests from people about this post, I finally got time to create the prototype.

I first started by creating a property schema and an internal schema to use as an initialization correlation set.

I also created a flat file schema that would start the process of extracting the information. 

The next part is a map that takes the base schema and extracts the ISA id, ISA qualifier, GS id, and the control number. The picture below shows all of the functoids and their subsequent arguments.

So let's take a look at the orchestration that does all of this work:

The first thing that happens is that the EDI document gets parsed against the flat file schema; I have a receive pipeline that parses the file for this orchestration to pick up.

It then creates the Who Is message; the sender information (ISA Qualifier, ISA Sender Id, Control Number, GS Id) is the first thing that I extract. I also store the filename in the cross-reference table for use in the flat file message later on. The following code stores the data:

Microsoft.BizTalk.CrossReferencing.CrossReferencing.SetCommonID("EDI", "FileName",
    whoIsMsg(hipaaEnvelope.ISA05) + whoIsMsg(hipaaEnvelope.ISA06) + whoIsMsg(hipaaEnvelope.ISA13),
    System.IO.Path.GetFileName(skeletonMsg(FILE.ReceivedFileName)));

Then the message is sent to the HIPAA accelerator. While the accelerator is starting to parse the file, I initialize the correlation set by sending the whoIsMsg to the message box and immediately picking it up again.

Now the orchestration waits to pick up the subsequent HIPAA messages (well, at least for 10 minutes); it then picks up the XML version of the HIPAA document, creates the flat file, and assigns the filename from the original message using the following code:

flatfileMsg(FILE.ReceivedFileName)=skeletonMsg(FILE.ReceivedFileName);

Through my contact page you can get a copy of the code. Please follow the instructions in the readme.txt so that you can implement the Cross Referencing functionality along with importing the 837 schema, as the code needs to have those pieces imported.

Jan 17 2007
 

Unlike BizTalk's groups, the HIPAA EDI Subsystem group needs to be set up as a domain group when setting up a distributed system. A local group will give an error about invalid characters, and none of the places that log the data (event log, log file) explain the issue any further when going through the configuration of the HIPAA EDI accelerator.

 

Update: Don't bother trying to install a distributed environment when running under an NT4 domain group. Yes, you can get BizTalk to communicate successfully with a remote SQL Server, but the HIPAA Configuration Framework will not validate it against an NT4 domain.

Jan 03 2007
 

A recent question was asked of me:

How do I change the segment terminator in the HIPAA 837 schema to ‘~’ (tilde)? I think in my schema currently I have CR/LF. I need to change that to ‘~’. Currently my output comes out like this:

ISA*00*          *00*          *ZZ*1234567        *ZZ*7654321        *070129*1247*U*00401*000010247*0*P*>

GS*HC*1234567*7654321*20070129*124745*10247*X*004010X098A1

I need the output to look like the following:

ISA*00*          *00*          *ZZ*1234567        *ZZ*7654321        *070129*1247*U*00401*000010247*0*P*:~GS*HC*1234567*7654321*20070129*124745*10247*X*004010X098A1

This is actually not set in the schema, but in the port configuration: when you set up the send port you specify the Wrap Segments option, along with the original segment terminator.

In this case you want to have Wrap Segments set to No, which will not put a CR/LF after the segment terminator.

After you change the setting here, you need to restart either the HIPAA service or the EDI service.

Dec 12 2006
 

This blog entry has been literally a year in the making!

While working at a client, the requirement was to separate claims and decide which system each went to (QNXT or the existing mainframe system).

I decided that using the multiple 837 schema was the best approach. This means that the HIPAA accelerator takes the HIPAA file and submits the claims (in XML format) individually to the message box. With those messages, I created a singleton orchestration process that would pick up each message and individually go through some calls to find out which system it went to.

Once the decision was made, I would concatenate the message onto the rest of the messages that had already come in for this HIPAA transaction.

What I saw happening was that the concatenation process took longer and longer to append the current message to the rest of the messages.

Directions changed, and we moved away from having BizTalk be the routing application, for various reasons; speed of processing being one of the many.

I worked at another client, and the same issue came up. We started off working with eligibility files (834), and I began with the same approach; I immediately saw the concatenation process taking longer to complete as more messages were processed. This time we were able to test with some significantly large files, so I could get some real numbers to look at.

It started out taking 1 second to process the first subscriber in the 834, and by the time it got to subscriber 1000, it was taking 10 seconds to finish the process. I needed to come up with a different approach, because these were relatively small files, and we were looking at getting files that had 200,000 subscribers.

I thought: I need a way to store the data in a manner that will not continually increase in time as the dataset grows. What could I use? A database table came to mind; I could place the data in a table, process it, and once everything completed, extract the data out of the table and send it off.

I implemented sending the data to a database table instead of concatenating the messages together. Once I started testing I immediately saw an improvement in performance! Looking at the details, the first subscriber took 1 second to process, and so did subscriber 1000!

I was not satisfied though: if each subscriber was going to take 1 second, then the table below shows the time to process a file.

Subscribers    Minutes    Hours
      1,000      16.67     0.28
      2,000      33.33     0.56
      3,000      50.00     0.83
      4,000      66.67     1.11
      5,000      83.33     1.39
      6,000     100.00     1.67
      7,000     116.67     1.94
      8,000     133.33     2.22
      9,000     150.00     2.50
    100,000   1,666.67    27.78
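
To show where those numbers come from, here is the arithmetic at one subscriber per second:

// At one subscriber per second:
int subscribers = 100000;
double minutes = subscribers / 60.0;   // 1,666.67 minutes
double hours   = subscribers / 3600.0; // 27.78 hours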

The question then was: how do I process them faster? How could I send them to the database faster than I already was? I might be able to optimize the extraction and sending to the database a little, but even if I cut the time in half, I would still be looking at 13-plus hours to complete a single file.

What if I ran multiple occurrences of the extraction process at the same time? I would break my singleton orchestration, but I would essentially open up the flood gates, and it could process as many messages as possible at the same time. The next question that came to mind: what about the very distinct possibility of table locking issues, since I would be doing multiple inserts into the same table at once? I needed a highly optimized process for inserting data into the database that could handle many inserts happening at once. I am also not a database guru, so I needed something someone else had developed that I could implement.

BAM – it hit me. BAM (Business Activity Monitoring) is optimized to accept many messages and insert them into a table, and it definitely has to be designed to capture many messages at the same time. There are two flavors of BAM that can be invoked from BizTalk 2004: DirectEventStream and BufferedEventStream. Because DirectEventStream writes synchronously and would cause performance issues, I decided the BufferedEventStream route was possibly the best approach. So I have many messages being processed, and the BAM data is sent to the MessageBox to be inserted into the BAMPrimaryImport database when BizTalk gets around to it.
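
Here is a minimal sketch of the BufferedEventStream call pattern; the activity name "Subscriber834" and its items are made up for illustration:

using Microsoft.BizTalk.Bam.EventObservation;

public class SubscriberBamWriter
{
    // Writes one subscriber's data through the MessageBox rather than
    // directly to BAMPrimaryImport; the second constructor argument is
    // the flush threshold (events buffered before an automatic flush).
    public void RecordSubscriber(string activityId, string subscriberId, string targetSystem)
    {
        BufferedEventStream bes = new BufferedEventStream(
            "Server=.;Database=BizTalkMsgBoxDb;Integrated Security=SSPI;", 5);

        bes.BeginActivity("Subscriber834", activityId);
        bes.UpdateActivity("Subscriber834", activityId,
            "SubscriberId", subscriberId,
            "TargetSystem", targetSystem);
        bes.EndActivity("Subscriber834", activityId);
        bes.Flush(); // push anything still buffered to the MessageBox
    }
}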

I implemented this approach, and increased the processing speed from 1 subscriber per second to 10 per second!

The next issue was: how do I know when it is complete, and when can I extract the data from the BAM database? I needed a monitoring service to watch for when the inserts were done for a file and, once complete, extract the data and create the output.

What if I had each of the processes that sent data to BAM also send a message to another orchestration that consumes those messages? As soon as the messages quit coming, it could check the database to make sure that all the rows are there, and once they are, extract the data.
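
Something like the following is the shape of that check; the activity name "Subscriber834" and its SenderId item are illustrative (BAM exposes completed records through the bam_&lt;ActivityName&gt;_Completed view in BAMPrimaryImport), so this is a sketch rather than my actual code:

using System.Data.SqlClient;

public class BamRowMonitor
{
    // Returns true once BAM has persisted at least the number of rows we
    // counted from the orchestration messages for this sender.
    public bool AllRowsInserted(string senderId, int expectedCount)
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=BAMPrimaryImport;Integrated Security=SSPI;"))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT COUNT(*) FROM bam_Subscriber834_Completed WHERE SenderId = @sender",
            conn))
        {
            cmd.Parameters.Add("@sender", System.Data.SqlDbType.NVarChar, 50).Value = senderId;
            conn.Open();
            // BufferedEventStream rows trail the orchestration messages,
            // so this gets polled until the counts match.
            return (int)cmd.ExecuteScalar() >= expectedCount;
        }
    }
}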

I thought this would be a very simple process. It ended up being so (kinda), but I normally have to do things the hard way before finally getting them working successfully, and this did not stray too far from my past experiences.

This is the design that I had: many orchestrations would be running, and I would have one orchestration picking up all of the messages created by the HIPAA to BAM orchestrations. As soon as I quit receiving messages, I would make sure that the number of rows in BAM matched the number of messages I had picked up; once everything matched, I would extract the data. I have to check the row count against what I picked up because with BufferedEventStream, messages are sent to the MessageBox and inserted when resources are available, not directly like DirectEventStream. So I could get the last message from the HIPAA to BAM orchestration before the last row is inserted. Below is the vision I had:

 

This is where it got fun!

After using Kevin Lam’s blog as a guide, I implemented forward partner direct binding.

I have created a simple prototype on implementing the forward partner direct binding approach. The first orchestration consumes all messages from a particular port. The sample message looks like this:

<ns0:Root SenderId="123456" xmlns:ns0="http://PartnerPortExample.Input">
  <Data>
    <Information>Information</Information>
  </Data>
</ns0:Root>

It would then create a message that just had the SenderId, to be sent to the Singleton Orchestration, which would correlate on it and pick up all messages for that SenderId.

<ns0:Root SenderId="123456" xmlns:ns0="http://PartnerPortExample.Status" />

I promoted the SenderId in the http://PartnerPortExample.Status message. One key thing to take away is that the property needs to be a MessageDataPropertyBase. If it is a MessageContextPropertyBase it will not work: when the promoted field is a context property, the subscription engine cannot match the message from the HIPAA to BAM orchestration to the Singleton Orchestration, and it will state that no matching subscription could be found.
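
For reference, here is roughly what the property class looks like when it is generated from a property schema whose Property Schema Base is MessageDataPropertyBase; the attribute arguments and namespace are illustrative, since BizTalk generates this for you when you compile the property schema:

namespace PartnerPortExample
{
    // The key point: deriving from MessageDataPropertyBase (not
    // MessageContextPropertyBase) is what lets the subscription engine
    // match the promoted SenderId across the partner ports.
    [Microsoft.XLANGs.BaseTypes.PropertyType("id", "http://PartnerPortExample.PropertySchema", "string", "System.String")]
    public sealed class id : Microsoft.XLANGs.BaseTypes.MessageDataPropertyBase
    {
        private static System.Xml.XmlQualifiedName _qname =
            new System.Xml.XmlQualifiedName("id", "http://PartnerPortExample.PropertySchema");

        public override System.Xml.XmlQualifiedName Name
        {
            get { return _qname; }
        }

        public override System.Type Type
        {
            get { return typeof(string); }
        }
    }
}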

I then set the outgoing port on the HIPAA to BAM orchestration to Direct and chose the Singleton orchestration as the partner port. In the Singleton orchestration, I set the partner port to itself.

I also set up the correlation set to drive off of the SenderId.

Below are some screen shots of the prototype:

Here is the Process orchestration that takes the original file and extracts the SenderId into the Status message. Some things to notice: the binding on the InternalPort is set to Direct, and the Partner Orchestration Port is set to the Singleton orchestration.

The code in the Assign Promotion Message Assignment shape is the following:

StatusMessage(PartnerPortExample.id)=ExternalMessage.SenderId;

 

Here is the Singleton Orchestration that loops through, capturing all of the messages that have the PartnerPortExample.id correlation set.

It then creates a message recording the number of files it processed and sends the following, with the SenderId as the filename:

<ns0:Root Count="95" xmlns:ns0="http://PartnerPortExample.Result" />

Here is the code in the message assignment shape:

 

TempXML = new System.Xml.XmlDocument();
TempXML.LoadXml("<ns0:Root Count=\"" + System.Convert.ToString(Count) + "\" xmlns:ns0=\"http://PartnerPortExample.Result\" />");
ResultMessage = TempXML;
ResultMessage(FILE.ReceivedFileName)=StatusMessage(PartnerPortExample.id);

 

I want to thank Jeff Davis, Keith Lim, Kevin Lam, and Adrian Hamza for helping me determine that you cannot use context properties as the correlation set on partner ports.

Through my ‘contact me’ page, let me know if you would like to get a copy of my prototype.

Dec 07 2006
 

Let's just say that there is a set of published companion documents that has been distributed to a client. In there, the list of valid values is even more restrictive than what is published publicly. These codesets are valid only for this client, and additional codesets are valid for another client.

Making partner-specific schemas is possible! Yes, I know, this is what we all have been losing sleep over!

If you really want to make partner-specific schemas, it works on both v3.0 and v3.3 of the accelerator.

It is pretty straightforward:

  1. Make sure that you have the party defined.
  2. The party has to be defined in a receive location; if it is not, the schema will not show it as an available Partner URI.
  3. In the schema that you want to customize for a partner, click on the root node; in the properties there is a Partner URI drop-down list.
  4. Choose the partner you wish to make the customization for from the drop-down list.
  5. Change the target namespace to make it unique for that partner.
  6. Make your modifications to the schema.
  7. Validate the schema so the customization is uploaded to the database.
  8. Deploy.

Note: If you do not see the partner in the Partner URI list even though you have defined the receive location (possibly with a binding file), redefine the receive location by changing the address to another partner and then assigning it correctly again, restart the HIPAA service, and it should show up.

You now have a partner-specific schema that BizTalk will parse depending on the party definition you have defined, and there you can make your specific mapping for that client. In my case I am able to have the accelerator parse the client's file using their additionally restrictive schema definition, then map it to the standard schema, where it goes into the universal mapping that has been developed for all of the clients.

Actually this functionality exists in both the HIPAA accelerator and the Base EDI adapter. From ages past I worked on the Covast Accelerator, but I can’t remember if that functionality is present there.

Not too bad.

Nov 23 2006
 

The question that I have raised a few times in my own head is how you can use the HIPAA accelerator in a clearing house environment. My definition of a clearing house is a company that represents different companies, so essentially the sender id that you define in the send handler is not always the same.

There is currently no way to create multiple send handlers, so Microsoft has created a pipeline component in which you specify the sending ids it will reset them to. Because most clients need an acknowledgment, I would always recommend using the adapter and not the pipeline component, as I documented here.

So my suggestion is as follows for inbound transactions:

  1. Create a receive port called DMZ (the port where clients will drop their files off) and in it create separate receive locations for each client's inbound folder, using the PassThru pipeline.
  2. Create a send port using the PassThru pipeline, filtering on BTS.ReceivePortName == "DMZ", and have the location defined as %documentshome%\PickupEDI (remember you actually have to put the full path in the send port); see the sketch after this list for scripting the filter.
  3. Then you have receive locations using the HIPAA_EDI adapter for each of the partners.
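
If you would rather script step 2's filter than click it in, here is a rough sketch using ExplorerOM; the send port name "DMZ_Relay" is made up, and Operator="0" is equality in the filter XML that BizTalk Explorer generates:

using Microsoft.BizTalk.ExplorerOM;

public class DmzFilterSetup
{
    public static void ApplyFilter()
    {
        BtsCatalogExplorer catalog = new BtsCatalogExplorer();
        catalog.ConnectionString =
            "Server=.;Database=BizTalkMgmtDb;Integrated Security=SSPI;";

        // Subscribe the relay send port to everything arriving on the DMZ receive port.
        SendPort port = catalog.SendPorts["DMZ_Relay"];
        port.Filter =
            "<Filter><Group>" +
            "<Statement Property=\"BTS.ReceivePortName\" Operator=\"0\" Value=\"DMZ\" />" +
            "</Group></Filter>";

        catalog.SaveChanges(); // persists the filter to the management database
    }
}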

My corresponding suggestion for outbound transactions is as follows:

  1. Using logic derived from this entry, create separate send ports using the HIPAA_EDI adapter and point the folder location to a staging area.
  2. Obtain the pipeline fix from Microsoft Support and create separate pipelines for each of the partners that you represent.
  3. Create a separate receive location, using the PassThru pipeline, that picks up the file created in step 1.
  4. Create a separate send port that filters on the receive location defined in step 3 and uses the newly created pipeline, which will replace the sending party's information defined in the send handler with the correct sending ids.
Oct 23 2006
 

 

Many a client has requested to have custom filenames using the HIPAA_EDI adapter.

I originally planned on using a custom ASM pipeline component, setting the ghost port to batching, and setting the ReceivedFileName within the orchestration.

It turns out that when you do that, no file is created, and no errors are raised either! This really smells like a bug, as there is no documentation on this!

After contacting Microsoft and submitting a ticket, this is the response I got back:

The HIPAA pipeline is not architected to support batching. It simply does a one-on-one translation from XML to EDI and that’s it.
The batching mechanism is depending on the data in the audout table and the HIPAA pipeline does not persist any data.
The only way you can create outbound EDI batches is via the HIPAA EDI adapter.

I ended up having to re-engineer the process (written about here) so that, by single-threading the entire process, files do not get batched together and files can be accounted for.

Oct 19 2006
 

Here is the list of changes that need to take place if you are having issues either validating or deploying the WPC schemas.

1. Stop the HIPAA Service
2. Delete all files (normally the .wrk and .eif files) in C:\Documents and Settings\All Users\Application Data\Microsoft\BizTalk Server 2004\HIPAA_EDISubsystem\EIF
3. Start the HIPAA Service
4. In {Program Files}\Microsoft BizTalk Accelerator for HIPAA 3.0\HIPAA_EDISubsystem, double-click compeif.exe and wait for it to complete
5. Open the parame table in BizTalkHIPAA_EDIDb and change the repolock column to NULL by pressing CTRL+0 (see the sketch after this list for a scripted alternative)
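
If you would rather script step 5 than use the table editor, something like this should do it, assuming the parame table and repolock column are as described above:

using System.Data.SqlClient;

public class RepoLockReset
{
    // Clears the repository lock value that blocks validating/deploying.
    public static void ClearLock()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=BizTalkHIPAA_EDIDb;Integrated Security=SSPI;"))
        using (SqlCommand cmd = new SqlCommand(
            "UPDATE parame SET repolock = NULL", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}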

You now should be able to re-validate or re-deploy the schemas that were causing the issue.

Oct 10 2006
 

 

Here is a list of the behaviors that I have found when using the custom pipeline component vs the adapter for receiving and sending data:

Pipeline

  • Allows multiple receive ‘locations’ for a single party, where you define a receive location that has the configuration information and then multiple pick-ups and drop-offs, where any adapter (file, ftp, http, MQ Series, etc.) can be used
  • Ability to use Send Port Groups (archiving is an example of this usage)
  • Does not check the interchange control number for duplication per party
  • Data that is received or sent is not archived in the location defined by the documentshome value in the parame table
  • No information is logged to the audin and audout tables, so the HIPAA_EDI reports in HAT are not available
  • Promoted properties in the WPC schemas are not invoked
  • No ACK is generated for delivery back to client

Adapter

  • Only a single port can be defined for receiving and sending
  • Copies of both EDI and XML data are stored in the documentshome location
  • Interchange control number checking is done
  • Information is stored in the audin and audout tables, so the HIPAA_EDI reports are accessible
  • Using the DefaultXML pipelines, promoted properties can be used for correlation/filtering/orchestration usage

I have put together a really simple example of both usages, so differences can be seen.

Instructions:

  1. Deploy the Solution
  2. Using the Deployment wizard, import the Binding.xml file
  3. In the Management Console, in the HIPAA_EDI adapter, in the properties of the Send Handler, set the Party to Us (see HIPAAConfig1.JPG and HIPAAConfig2.JPG)
  4. Restart the HIPAA service
  5. Enable the Receive Locations
  6. Start the Send Port
  7. Drop the file into the ..\input directory and see if the file is produced in the ..\output directory
  8. Notice the following information in HAT – HIPAA_EDI Reports