Often I have been in a situation where I used a SQL statement to generate a lot of other SQL statements, which I then executed using SPUFI, DSNTIAD or a similar tool. My problem with this approach has always been that the generated SQL statements become more than 80 bytes long, while SPUFI only accepts the first 72 characters of each input record. Because of this I have been looking for a simple way of splitting SQL statements, and I have now found a solution using SORT.
The OUTFIL OUTREC= command in SORT is able to do many things (see the tip from week 36/2003), and it is also capable of splitting one input record into several output records.
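The following is a minimal sketch of how it can be done. The split positions are an assumption of mine: three 72-byte segments, each padded with eight blanks (8X) to fill an 80-byte output record, so that SPUFI sees exactly 72 data bytes per record. Adapt the positions to your own data.

//SPLIT    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.ORIGINAL.DATA,DISP=SHR
//SORTOUT  DD DSN=MY.SPLITTED.DATA,DISP=OLD
//SYSIN    DD *
* COPY EVERY INPUT RECORD, BUT START A NEW OUTPUT RECORD AT EACH /.
* EACH 72-BYTE SEGMENT IS PADDED WITH EIGHT BLANKS (8X) TO FILL
* THE 80-BYTE OUTPUT RECORD.
  SORT FIELDS=COPY
  OUTFIL OUTREC=(1,72,8X,/,73,72,8X,/,145,72,8X)
/*

Note that SORT fails if OUTREC refers to positions beyond the end of the input record, which is where the LRECL requirement below comes from.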
This example requires that dataset MY.ORIGINAL.DATA has an LRECL of at least 271 in order to work. Dataset MY.SPLITTED.DATA must have LRECL=80, and both datasets must have RECFM=FB. If MY.ORIGINAL.DATA contains a lot of SQL statements, where each statement occupies exactly one record and is more than 145 bytes long, then MY.SPLITTED.DATA will contain the same number of SQL statements, but each of them now occupies three records, and the SQL statements are ready to be executed by SPUFI. One of the weird details about SPUFI is that input records are concatenated without any interpretation or compression, so you do not have to worry about splitting in the middle of a column name, for instance.
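To illustrate with an invented example, suppose MY.SPLITTED.DATA ends up containing these three records (the first two are exactly 72 characters wide):

SELECT TBCREATOR, TBNAME, COLNO, COLTYPE, LENGTH, REMARKS FROM SYSIBM.SY
SCOLUMNS WHERE TBCREATOR = 'SYSIBM' AND TBNAME = 'SYSTABLES' ORDER BY CO
LNO ;

SPUFI simply glues the first 72 columns of the records together, so SYSIBM.SY and SCOLUMNS become SYSIBM.SYSCOLUMNS again, CO and LNO become COLNO, and the blanks padding the last record land harmlessly after the semicolon.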
When it occurred to me that SORT is able to split records, things started rolling for me. I now use SORT to generate all sorts of things. Imagine that the input is a list of dataset names that you want to create. Normally you would hand code an IEFBR14 job step or do some ISPF file tailoring, but it is really easy to replace those approaches with SORT, as the sketch below shows. When the idea really catches you, there is a whole world of possibilities in the simple / option in the OUTFIL OUTREC= command. I might have forgotten to mention that / is not yet available in the OUTREC FIELDS= command.
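Here is a rough sketch of the idea. All dataset names are made up, and I assume the input list carries a unique ddname in columns 1-8 and the dataset name to be created in columns 10-53; MY.GENERATED.JCL is preallocated with RECFM=FB and LRECL=80, so SORT pads the shorter lines with blanks. The generated DD statements can then be dropped into an IEFBR14 step.

//GENJCL   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.DSNAME.LIST,DISP=SHR
//SORTOUT  DD DSN=MY.GENERATED.JCL,DISP=OLD
//SYSIN    DD *
* BUILD THREE LINES OF JCL FROM EACH INPUT RECORD. THE DSNAME IS
* PLACED LAST SO THAT ITS TRAILING BLANKS CANNOT BREAK THE JCL.
  SORT FIELDS=COPY
  OUTFIL OUTREC=(C'//',1,8,C' DD DISP=(NEW,CATLG),UNIT=SYSDA,',/,
        C'//             SPACE=(TRK,(1,1)),RECFM=FB,LRECL=80,',/,
        C'//             DSN=',10,44)
/*

An input record like

DD000001 MY.NEW.DATASET

then becomes

//DD000001 DD DISP=(NEW,CATLG),UNIT=SYSDA,
//             SPACE=(TRK,(1,1)),RECFM=FB,LRECL=80,
//             DSN=MY.NEW.DATASET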