Archive for May, 2007

Using XML Publisher in eBusiness Suite (Part 4)

Tuesday, May 29th, 2007 by

Development of reports using XML Publisher lends itself to a slightly different approach from the normal “one report, one developer” model. XML Publisher reports require two distinct skill sets for the two stages of report production:

Data Collection
This is my name for the part of the process that involves creating an XML data file. I’ve covered some of the methods that can be used to generate XML in my previous three blog posts. The main skills for a developer creating this part of the report are of the more traditional PL/SQL type. Knowledge of the Application is also extremely important.

Data Presentation
This is the process of taking the XML data file and converting it to the final output. The developer building the Data Presentation stage does not need any Oracle skills at all, but does need to be conversant with a whole new set of technologies, the main ones being XPath and XSL.

These two roles need a common starting-point to work toward and this is done by creating an XML Schema file. An XML Schema (with the file extension XSD) is an XML file that describes the structure of XML files.
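As a taster, a minimal XML Schema might look like the fragment below. The element names here are invented for illustration only; they are not taken from any seeded eBusiness Suite definition.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Root element: a list of invoices -->
  <xs:element name="INVOICES">
    <xs:complexType>
      <xs:sequence>
        <!-- Zero or more INVOICE elements -->
        <xs:element name="INVOICE" minOccurs="0" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="TRX_NUMBER" type="xs:string"/>
              <xs:element name="TRX_DATE" type="xs:date"/>
              <xs:element name="AMOUNT" type="xs:decimal"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

An XML data file conforming to this schema would have a single INVOICES root containing any number of INVOICE elements, each with a transaction number, date and amount — exactly the kind of contract the two roles can develop against independently.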

XSD files, like any XML file, can be created using your favourite text editor (Notepad, vi, etc.), but I would recommend using an XML-aware editor, preferably one that can also display the XML Schema graphically. I have recently been using Altova’s XMLSpy, which I would highly recommend, but if you want something cheaper, Oracle’s own JDeveloper has suitable functionality.

Below I’ve created a very simple XML Schema, describing a data file of Invoices for a given Date range:

(more…)

Spying On Elves At Work

Tuesday, May 29th, 2007 by

The great thing about concurrent programs is that they run in the background, busily getting their job done without pestering you. There is a price you pay for this apparently hassle-free life, and you may not even know you’re paying it.

Like all other programs, concurrent programs need system resources to run; and as we all know, system resources are limited and valuable. The price you, or rather you and your DBA, are paying for background processes is the lack of visibility of the use, and potential abuse, of these resources.

It is primarily the role of the DBA to protect the system and ensure that no rogue processes eat up CPU or hog the disks, so why, as a developer, should you care? Well, many DBAs have what developers call ‘a personality defect’: this is when a DBA asks loudly, in the middle of the office, the equivalent of “who’s been trip-trapping over my bridge?”, with the corresponding ugly troll look and tact. This can be mitigated (and, if done right, the DBA can end up owing you one) by judicious use of the DBMS_APPLICATION_INFO package.

This Oracle built-in package is there to give the DBA more detail about a running process: the value of ‘module’ is automatically set by Apps to the name of the concurrent request, and the value of ‘action’ is automatically set to ‘Concurrent Request’. This can be seen in TOAD via DBA > Session Browser, or by running:

SELECT s.SID, s.SERIAL#, s.MODULE, s.ACTION
FROM V$SESSION s;

The value that is not set automatically by Apps is the running state of the process, because Apps has no idea what the process is attempting to do. It is this that we need to update in our concurrent programs. Long-running operations are visible to the DBA via the view V$SESSION_LONGOPS; the SQL engine posts messages to this view for known long-running SQL operations such as table scans, which is useful in itself, but the view can also be updated from within a concurrent program using:

dbms_application_info.set_session_longops

This call should be made throughout the concurrent program, ideally via a small helper procedure that is called liberally, to show the progress being made by the current operation. The following notes should make it easier to use.

1. The initial value of rindex should be set to the package constant dbms_application_info.set_session_longops_nohint.

2. The value of slno should be null and never updated by you.

3. The value of op_name is set once per row and isn’t updated for the same row on subsequent calls.

4. The value of target should be null, or the object_id (from all_objects) of the object being worked on. For this to be correctly translated into an object name, the parameter target_desc must explicitly be set to NULL. This is set once per row and isn’t updated for the same row on subsequent calls.

5. The value of sofar is the value that you would change on each call, usually the percentage done.

6. The value of totalwork is set once per row and isn’t updated for the same row on subsequent calls. Calls with the same parameters for everything else, but a different value for totalwork force a new row to be written.

7. The value of target_desc is set once per row and isn’t updated for the same row on subsequent calls.

8. The value of units is set once per row and isn’t updated for the same row on subsequent calls.

Here is a little test program illustrating its use.

DECLARE
   r   PLS_INTEGER;   -- rindex: identifies our row in v$session_longops
   s   PLS_INTEGER;   -- slno: internal use, leave for Oracle to manage
   t   PLS_INTEGER;   -- target: left null here
BEGIN
   IF r IS NULL THEN
      r := dbms_application_info.set_session_longops_nohint;
   END IF;
   FOR a IN 1 .. 100 LOOP
      dbms_application_info.set_session_longops
         (rindex      => r
         ,slno        => s
         ,op_name     => 'TestOpName'
         ,target      => t
         ,sofar       => a
         ,totalwork   => 100
         ,target_desc => 'Description Msg'
         ,units       => 'TestUnits'
         );
      -- ADD DELAY HERE IF REQD TO SEE IN ACTION
   END LOOP;
END;

In a different session, run the following query to view the running program:

SELECT *
FROM v$session_longops
WHERE opname = 'TestOpName';

Ok, so background processes aren’t really Elves, and checking a view that you have purposefully written to isn’t exactly spying, but hey… use your imagination!

Back-to-Back Order Process Removes the Mystery About How a Customer Order Will be Filled

Thursday, May 24th, 2007 by

There are times when you absolutely, positively have to know the status of supply for a customer’s order and know with a very high degree of certainty that the supply is dedicated to that sales order.

Oracle’s ATO process allows you to have this confidence by hard-pegging a sales order line to a work order or a purchase order. In this article, we will focus on the SO-PO link, which is sometimes called a back-to-back order.

The back-to-back process is similar to that for drop ship sales orders. In summary, a sales order line creates a purchase requisition (or work order), which can then be autocreated into a purchase order either manually or automatically.

There are several key differences between the drop ship and back-to-back processes. The biggest one is the flexibility given to the CSR (Customer Service Rep) to control the method of supply for the sales order.

With a drop ship, the CSR has the ability to change the default supply source, internal or external, on the line and can override the defaults for an item.

However, back-to-back orders are controlled by the ATO and make/buy item attributes. These controls can’t be changed on the sales order line. This is a strength or a weakness depending on the business requirements.

Another key difference is that the ship-to address for the supplier will be an internal warehouse as opposed to a customer location. The sales order line will need to be picked and shipped after the PO has been received.

One of the great advantages of using the back-to-back process over shipping from stock is that there is a clearly defined relationship between the sales order and the purchase order. This gives the CSR the order status, expected delivery times, and confidence that the inventory will be used for the designated sales order line.

The mechanics for how to set up this process are similar to those that are described in the previous drop ship articles.  In summary:

1) Create a saleable buy item that has the ATO attribute enabled.

2) Set up a sourcing rule or BOD (Bill of Distribution) that sources the item to a vendor.

3) Also set up a quotation or ASL if you want to automatically create the PO with the correct price.

Like many things Oracle, this is a helpful tool when applied to the right situation. You should have it in your supply chain tool box.

-Dean

Desire is the key to motivation, but it’s determination and commitment to an unrelenting pursuit of your goal — a commitment to excellence — that will enable you to attain the success you seek. - Mario Andretti
 

Using XML Publisher in eBusiness Suite (Part 3)

Tuesday, May 22nd, 2007 by

So far in this series I haven’t actually talked about XML Publisher at all: I wanted to concentrate on the creation of XML data files. This was for a good reason - without XML data files, XML Publisher is pretty useless. This time I’m going to briefly cover two other XML production methods that are loosely related.

Oracle Reports 6i or 9i

When any Oracle Reports RDF/REP is run under the Concurrent Manager you can specify the Output Type in the Concurrent Program definition. Ordinarily this would be set to Text or PDF, but if you set it to XML the Reports engine will generate an XML file that replicates your report’s Data Model. The tag names of the XML elements in the file are normally generated by Reports, but they can be overridden by the programmer.

Below is a fragment of the output that you might get if you set RAXINV (Receivables Invoice Print) to XML:

  <?xml version="1.0"?>
  <!-- Generated by Oracle Reports version 6.0.8.25.0 -->
  <RAXINV>
    <LIST_G_ORDER_BY>
      <G_ORDER_BY>
        <ORDER_BY>Conf Test1</ORDER_BY>
        <LIST_G_INVOICE>
          <G_INVOICE>
            <CUSTOMER_TRX_ID>*</CUSTOMER_TRX_ID>
            <TRX_NUMBER>Conf Test1</TRX_NUMBER>
            <TRX_TYPE>INV</TRX_TYPE>
            <TRX_TYPE_NAME>Invoice</TRX_TYPE_NAME>
            <OPEN_RECEIVABLE_FLAG>Y</OPEN_RECEIVABLE_FLAG>
            <TRX_DATE>15-APR-07</TRX_DATE>
            <CUSTOMER_NUMBER>1005</CUSTOMER_NUMBER>
            <INVOICE_CURRENCY_CODE>GBP</INVOICE_CURRENCY_CODE>
            <BILL_CUST_NAME>Cope Management</BILL_CUST_NAME>
            <BILL_ADDRESS1>Rathbourn Street</BILL_ADDRESS1>
            <BILL_ADDRESS2></BILL_ADDRESS2>
            <BILL_CITY>Bath</BILL_CITY>
            <BILL_POSTAL_CODE>BA14 5PY</BILL_POSTAL_CODE>
            <BILL_COUNTRY>GB</BILL_COUNTRY>
            <LIST_G_INV_TERM>
              <G_INV_TERM>
                <TERM_SEQUENCE_NUMBER>1</TERM_SEQUENCE_NUMBER>
                <PURCHASE_ORDER_NUMBER>PO67234234</PURCHASE_ORDER_NUMBER>
                <TERM_DUE_DATE_FROM_PS>26-APR-07</TERM_DUE_DATE_FROM_PS>
                <PRINTING_PENDING>N</PRINTING_PENDING>
                <TRX_FREIGHT_AMOUNT>0</TRX_FREIGHT_AMOUNT>
                <TERM_NAME>11 Days</TERM_NAME>
                <LIST_G_LINE_TOTAL>
                  <G_LINE_TOTAL>
                    <LINE_OF_TYPE_FRT>A</LINE_OF_TYPE_FRT>
                    <ORDER_BY1>1</ORDER_BY1>
                    <LINK_TO_LINE>1540</LINK_TO_LINE>
                    <LIST_G_LINES>
                      <G_LINES>
                        <LINE_NUMBER>1</LINE_NUMBER>
                        <LINE_CUSTOMER_TRX_ID>1608</LINE_CUSTOMER_TRX_ID>
                        <LINE_CUSTOMER_TRX_LINE_ID>1540</LINE_CUSTOMER_TRX_LINE_ID>
                        <LINE_CHILD_INDICATOR>0</LINE_CHILD_INDICATOR>
                        <LINE_TYPE>LINE</LINE_TYPE>
                        <LINE_DESCRIPTION>Commercial Revenue</LINE_DESCRIPTION>
                        <LINE_QTY_ORDERED>1</LINE_QTY_ORDERED>
                        <LINE_QTY_INVOICED>1</LINE_QTY_INVOICED>
                        <LINE_UOM>Each</LINE_UOM>
                        <LINE_UNIT_SELLING_PRICE>55</LINE_UNIT_SELLING_PRICE>
                        <LINE_EXTENDED_AMOUNT>55</LINE_EXTENDED_AMOUNT>
                        ....

(more…)

Waste Management

Friday, May 11th, 2007 by

There are two types of waste: good waste and bad waste. Good waste is expected and is part of a process that produces a good product with some waste. Bad waste is neither expected nor desired and should be reduced or, preferably, avoided altogether.

Imagine having friends round for dinner. When you’re cooking, if your home-made mayonnaise curdles, do you keep adding more egg, or more olive oil, in the vain hope that it may all come right in the end? No: if it’s all gone pear-shaped, you throw it away, accept the loss and start again. This is an example of good waste, insofar as, by recognising early on that mistakes have been made, it is less costly in the long term to start again, and the final product will be better for it.

Do you waste your money, frittering away your hard-earned cash on items that you will never use? This is an example of bad waste and needs to be reduced as much as possible.

The management of waste should be an important part of the life of a developer.

In computing terms, good waste is related to design, and is the recognition that it is better to redesign something, such as a query, form or report, than to continually attempt to hack around a bad design. Here, the first attempt should be viewed as a prototype, which has probably clarified the requirements and can be analysed to see why it doesn’t work very well. During the redesign phase, the lessons learned from the first attempt can be incorporated into the new design.

Ideally, this would never happen as you’d get the design right first time; after all, you’re a highly paid consultant, right? But life isn’t that simple: often the requirements are not clear at the outset, progress is demanded by the client, and the solution evolves. Another scenario may be that you are taking on a failed solution from another person, and the expectation is that you will ‘make it work’.

In these situations, a reality check often comes in handy. Whilst it may leave a sour taste in the mouth of the client that has just spent a chunk of money on a poor development, it is usually better (for them and for you) to recognise it early and correct it than muddle on, blaming users for poor requirements or previous developers for poor solution designs.

From a long-term maintenance and performance perspective, the key is in the design. Getting the design right in the first place is the top priority; recognising a bad design and rectifying it with a rewrite is the next best thing. Refactoring a design over time from bad to good comes third (and is often the only practical option), and continuing with a bad design is the worst situation.

In computing terms, bad waste is related to resources and is the unnecessary use of scarce resources, such as CPU, I/O and memory to perform a task when the same task could be performed using far fewer resources.

The wasting of resources can have a profound effect on the system as a whole. There are finite resources within any computer system, and if any are wasted then that takes away from resources that are available to service other users or background processes.

This type of waste can take a number of forms.

First, there is the unnecessary use of resources. For example, are there reports scheduled to run periodically that nobody ever uses? Should these be run on demand instead? Should they remain periodic, but with a longer gap between runs? What about reports that query the live system: do they need to access real-time data, or would a time interval on the data be acceptable, where the results are saved and simply fed back if the report is re-run within a 15-minute window? Should the reporting even be based on the live transactional database? This type of waste is identified from the client’s priorities as to which user actions should consume resources first.

Second, there is the incorrect use of resources, such as using the wrong index or performing a full table scan when the question being asked of the database is straightforward. This may require actions such as gathering up-to-date statistics, introducing histograms on skewed data, creating new indexes or partitioning tables. Here, your knowledge of the data and the database objects being used comes into play to help Oracle use the available resources correctly to answer the query.
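The statistics-gathering actions mentioned above can be sketched with Oracle’s DBMS_STATS package. The schema, table and column names below are placeholders for illustration, not real Apps objects:

```sql
BEGIN
   -- Refresh optimiser statistics for one table (placeholder names)
   dbms_stats.gather_table_stats
      (ownname          => 'APPS'
      ,tabname          => 'XX_MY_TABLE'
      ,estimate_percent => dbms_stats.auto_sample_size
       -- Build a histogram on a known skewed column
      ,method_opt       => 'FOR COLUMNS SKEWED_COL SIZE 254'
      ,cascade          => TRUE   -- also gather statistics on the indexes
      );
END;
```

As with any change to statistics, this can alter execution plans for every query touching the table, so it is worth trying on a non-production instance first.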

Finally, there is bad design. The recognition of bad design often comes from the identification of overuse of available resources for the given task. When this occurs and is identified, see good waste above.

