Need to ignore Trailing characters.


  • Author
    Posts
    • #16762

      Hi,

      I have defined a flat file schema. The ending record is followed by a CR+LF, which I have accounted for in the schema. I need to create an output XML for each input record, so I set the required properties and use a flat file disassembler in the receive pipeline.

      But the input I receive has many CR+LF pairs after the data (actual data + (CR+LF) + (CR+LF) + … and so on) in my input txt file, so the pipeline component fails.

      I keep getting this error:

      —————————————————————————————————————————————————————————————————————————-

      Event Type: Error
      Event Source: BizTalk Server 2006
      Event Category: BizTalk Server 2006
      Event ID: 5719
      Date:  05/12/2006
      Time:  11:01:49
      User:  N/A
      Computer: Test
      Description:
      There was a failure executing the receive pipeline: "pipelinecomponent, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9ac468cd94333c86" Source: "Flat file disassembler" Receive Port: "RxPort" 
      Reason: The remaining stream has unrecognizable data.

       ——————

      How can I make BizTalk stop checking for more input after the last record? That is, if it does not find anything after the last record defined in the schema, it should stop looking for more data.

      Is there a property I can set?

      Please help me out. Thanks in advance.

    • #16968

      Have you looked at the schema property to Ignore Trailing Delimiters?

       

      It’s on the <Schema> node if I remember correctly.


      • #16975

        Hi Stephen,

        I did try that property (it does not appear on the <Schema> node, but at the record level), but the same errors are thrown. In this scenario I use an Envelope Schema and an Inner Schema, and I have also enabled the Recoverable Interchange Processing property, but the error persists. After the last line it reports: Unexpected data found while looking for '00', where 00 is the tag identifier of the Inner Schema's record.

        Input format is something like:

        Order1(CR+LF)

        Order2(CR+LF)

        ….

        OrderN(CR+LF)

        (Order has various sub-records in it)

        I have an Envelope Schema – Infix, hex 0x0D 0x0A

                      Order Schema – Postfix, hex 0x0D 0x0A

        I need to de-batch the input, so the Order Schema is set to min occurs = 1, max occurs = 1.

        The Envelope Schema imports the Order Schema (and sets the Order record's max occurs to unbounded).

        Thanks

        AH

        • #16977

          Debatching flat files does not use the Envelope/Document schema model; that is for XML files only.

          To debatch you should just need the Order schema

          Order – Postfix, hex 0x0D 0x0A, maxOccurs = 1

          The flat file disassembler will iterate through the flat file and generate a new message for each Order it finds.
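          Conceptually, that iteration can be sketched in a few lines of Python (a simplified illustration, not BizTalk's actual implementation; the function name is mine):

```python
def debatch(data: bytes):
    # Each CRLF-terminated Order record becomes its own outbound message,
    # mimicking what the flat file disassembler does when the Order record
    # is Postfix-delimited by 0x0D 0x0A with maxOccurs = 1.
    for record in data.split(b"\r\n"):
        if record:                      # ignore empty trailing segments
            yield record + b"\r\n"
```

          Note that the `if record` guard is also what makes stray trailing CRLFs harmless in this sketch, whereas the real disassembler does not tolerate them.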

          You can send me your schemas and an example document if this does not work.

          • #16981

            Hi Greg,

            Thanks for the reply. I had already tried the method above (it is the de-batching approach explained in most blogs), and I also tried the envelope approach. Both work in general, but in my case I get an extra error, and the same problem occurs with both approaches. I may need to make some changes in my schema to accommodate it. I will mail you the schemas and an example.

            But let me also explain the scenario here: if you have come across it before, you might have a quick solution.

            ************

            I have a scenario where I need to de-batch a flat file. The flat file consists of many orders. Each Order has various records in it.

            E.g. flat file: the first 2 fields can be used as tag identifiers.

            00-header record1(CR+LF)                 
            05-Order related Record 1A(CR+LF)
            10-Order related Record 1B(CR+LF)
            99-Trailer record1(CR+LF)
            00-header record2(CR+LF)
            05-Order related Record 2A(CR+LF)
            10-Order related Record 2B(CR+LF)
            99-Trailer record2(CR+LF)

            and so On…

             

            So I have an Order Schema (min occurs = 1, max occurs = 1) with tag identifier 00 on the Order record. The Order record is delimited, whereas its children – Header, Rel 1, Rel 2, Trailer – are positional (all mandatory). Child order – Postfix, delimiter 0x0D 0x0A.

            No tag ID on the Header record, since I used it to identify the Order record; the others do have tag identifiers (05, 10, 99).

            I have an Envelope schema importing the Order Schema (here I make it 1, unbounded). Child order – Infix, hex 0x0D 0x0A.

            In the pipeline I use the Order Schema and set Recoverable Interchange Processing = True (all valid orders need to pass through).

            Testing: for a valid batch input, I get the correctly de-batched output.

            For a batch input containing an invalid order:

            Order1                        – valid
            00headerdetails               – Order2, invalid
            990000othertrailer fields     – Order2
            Order3                        – valid

             

            My input file has 3 orders, of which Order 2 is invalid.

            Problem:

            When I run the application, the valid orders (order1, order3) pass through, which is fine. The problem is with order2: I should get a single suspended message, but I get two.
            BizTalk takes the 00 after the 99 in 990000otherTrailerdetails as the tag identifier of an Order record and tries to validate it; because that record is incomplete, validation fails.
            It is as if I had an extra order in my input. In the event viewer I expect 1 error (for the single invalid order2), but I get 2 errors (order2 is split into 2 orders).
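            To see why the split happens, note that the tag also occurs inside the malformed trailer line. A rough illustration in Python (assuming, as described above, that the scanner resumes at the next occurrence of the tag in the character stream; this is not BizTalk's internal logic):

```python
# The malformed order2 trailer line from the example above.
trailer = "990000othertrailer fields"

# After order2's header fails to validate, the scanner hunts for the next
# "00" tag and finds one INSIDE the trailer, not at a record boundary.
false_tag_pos = trailer.find("00")      # lands inside "990000..."
bogus_order = trailer[false_tag_pos:]   # gets parsed as a second, bogus order
```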

            How can I take care of this?

            Thanks in advance

    • #18170

      Long time ago, but did you get any help with this issue? I have the same problem using ffdasm.exe.

      • #18517

        Hi,

        I have posted several issues in this single thread. If it is the final issue you are referring to: I had to write a custom pipeline component that inserts a unique identifier whenever a new line starts with “00”. This let me identify each order by that unique identifier instead of by “00”.

        Something like this –

        before –

        00——

        —-

        99——–

         

        now –

        OR:00——

        —–

        99——–

         

        And I now identify each message using the OR: prefix.
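        As a minimal sketch of that tagging step (plain Python rather than an actual BizTalk pipeline component; the function name is mine):

```python
import re

def tag_orders(text: str) -> str:
    # Prefix every line that BEGINS with the order tag "00" with "OR:",
    # so downstream logic can split on "OR:00", which a stray "00" in the
    # middle of another record (e.g. inside "990000...") can never produce.
    return re.sub(r"(?m)^00", "OR:00", text)
```

        In the real solution, this rewrite would run inside a custom decode-stage pipeline component before the flat file disassembler sees the stream.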

        Please let me know if this helped, or whether you meant some other issue.

        AH

        • #18721

          I know it is very late on this thread, but to help anyone else newly
          running into this error, here are a few tips to trace and fix the issue.
          1) If your problem is extra CRLFs at the end of the message, the answer
          has already been given here and elsewhere: the best approach is to add a
          separate dummy “Trailer Schema” to the receive pipeline, as suggested at
          http://www.biztalkgurus.com/forums/p/3833/7416.aspx.
          2) Otherwise:
          a. If you have the message and the schema on hand, use FFDasm.exe to
          locate the issue by submitting the message incrementally, segment by
          segment. Once the problem segment is identified, check properties such as
          the “Tag Identifier” of that segment to see whether they match the
          corresponding schema element. Most likely it is a schema mismatch in that
          segment: a mismatched tag identifier, missing element(s), a wrong
          “positional length”, or similar field properties. Validate the data found
          at the problem segment against the schema properties to see where the
          problem is, and fix the schema accordingly.
          b. If the schema was generated by a tool and you are receiving the message
          on the wire from another system (like an SAP-adapter-generated IDoc schema
          with the IDoc arriving from SAP into your orchestration), first try to
          capture the whole message without applying the generated schema (if there
          is no other way to see the message at this point). For example, you can
          use a schema with one string element to catch the entire message and then
          write it to a file. Once you can see the message, compare the “Tag
          Identifier” property of each element in the schema against the tag
          identifiers found in your message. If your message is huge, use FFDasm.exe
          as in the previous case to identify the problem segment.
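          For tip 2a, the incremental submission can be scripted. Here is a hypothetical Python helper (the name `write_prefixes` and the file-naming scheme are mine) that writes growing prefixes of the message to disk so each one can be fed to FFDasm.exe in turn until the failing segment is reached:

```python
def write_prefixes(path: str, delimiter: bytes = b"\r\n") -> list:
    # Split the flat-file message into its CRLF-delimited segments and
    # write prefix files containing the first 1, 2, ..., N segments.
    with open(path, "rb") as f:
        segments = [s for s in f.read().split(delimiter) if s]
    names = []
    for n in range(1, len(segments) + 1):
        name = f"{path}.prefix{n}.txt"
        with open(name, "wb") as out:
            out.write(delimiter.join(segments[:n]) + delimiter)
        names.append(name)
    return names
```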

          For instance, in my case the ORDERS01.xsd schema generated by the “mySAP
          Business Suite Schema Generation Wizard” has a child record “E1EDP19” with
          its “Tag Identifier” set to “E2EDP19001”, whereas the ORDERS01 IDoc being
          sent by SAP marks that segment with “E2EDP19002”. When the “Flat file
          disassembler” reached this segment it failed, as it could not recognize
          the tag E2EDP19002, which is not declared in the schema. The result was the
          exception ’There was a failure executing the receive pipeline: . Source:
          “Flat file disassembler” Reason: The remaining stream has unrecognizable
          data.’ Simply changing the “Tag Identifier” property value to “E2EDP19002”
          in the generated schema made the error vanish.

          Adding the dummy Trailer Schema to the receive pipeline swallows the rest
          of the message from the problem tag onward, which is not helpful when
          useful data is being sent in that part of the message. But if your problem
          is only extra CRLFs getting into the message and there is no issue with
          the schema itself, the dummy Trailer schema really does help, as suggested
          at http://www.biztalkgurus.com/forums/p/3833/7416.aspx.

          //SadguruSainath//

    • #18722

      Hi  –

      The best thing for BizTalk is to use a preprocessing script that modifies the file so BizTalk can parse it properly.
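      As a rough sketch of such a preprocessing step for the trailing-CRLF case discussed in this thread (run outside BizTalk before the file reaches the receive location; the function name is mine):

```python
def strip_trailing_crlf(data: bytes) -> bytes:
    # Drop every extra CR/LF after the last record, then restore exactly
    # one terminating CRLF, since the schema expects a postfix delimiter.
    return data.rstrip(b"\r\n") + b"\r\n"
```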

      Also, I don’t think anyone has mentioned that if the last field of a record is empty, or that scenario can possibly occur, your file parsing will break, as that is a scenario the parser cannot handle. This is a known bug that Microsoft has no intention of fixing.

      – wa 

  • The forum ‘BizTalk 2004 – BizTalk 2010’ is closed to new topics and replies.