
TPT in Detail

TPT Syntax
Teradata Parallel Transporter supports the following types of SQL statements:

  • Data Manipulation Language (DML): Insert, Update, Delete, Upsert, Merge, and Select
  • Data Control Language (DCL): Give, Grant, and Revoke
  • Data Definition Language (DDL): Create, Drop, Alter, Modify, Delete Database, Delete User, and Rename

OS Commands
Use the OS Command operator to send commands to the operating system on the client system.

Delimited Data
Delimited data are variable-length text records with each field or column separated by one or more delimiter characters. Delimited data are also known as VARTEXT.
Use the Data Connector operator to read or write delimited data.

Large Object Data Types
There are two kinds of large object data type:
Character large object (CLOB)
Binary large object (BLOB)
Three operators support the CLOB and BLOB data types.
The Inserter operator can insert CLOB and BLOB data types into a Teradata table.
The Selector operator can export CLOB and BLOB data types from a Teradata table.
The Data Connector operator can read/write CLOB and BLOB data types from/to a file.
Selecting the wrong operator to process the CLOB or BLOB data type terminates the job.

TPT script Structure

Building TPT Scripts 
TPT uses a SQL-like scripting language for extract, basic transformation, and load functions. This easy-to-use language is based on SQL, making it familiar to most database users. All operators use the same scripting language. This represents an improvement over the individual utilities, each of which has its own unique scripting language. A single script can be used to define multiple operators and schemas to create complex extracting and loading jobs. 
There are only a few statements that are needed to build a TPT script. A quick look at the basic statements can be seen here:

DEFINE JOB
Defines the overall job and packages together all following DEFINE and APPLY statements. 

DEFINE SCHEMA 
Defines the structure of a data object in terms of columns of specific data types. A given schema definition can be used to describe multiple data objects. Also, multiple schemas can be defined in a given script.

DEFINE OPERATOR 
Defines a specific TPT operator to be used in this job.

DEFINE DBMS 
Defines an instance of a database server to be used in this job.

APPLY 
A processing statement used to initiate a TPT load, update, or delete operation.

Generally, a TPT script has two sections:
1) Declarative section 
2) Executable section

Declarative section: all schema and operator definitions

Define the job name for the script
Provide the description of the job
Schema definition
Operator definition #1
…
Operator definition #n

Executable section: specifies all SQL statement processing (extract, load, filter, delete, and update), done by the APPLY definition.
It may also use a job variable file.

The structure of a TPT script is shown below.

Script example 

DEFINE JOB FILE_LOAD
DESCRIPTION 'Load a Teradata table from a file'
(
/* TPT declaration section */

DEFINE SCHEMA Trans_n_Accts_Schema
(
Account_Number VARCHAR(50),
Trans_Number VARCHAR(50),
Trans_Date VARCHAR(50),
Trans_ID VARCHAR(50),
Trans_Amount VARCHAR(50)
);
DEFINE OPERATOR DDL_OPERATOR
TYPE DDL
ATTRIBUTES
(
VARCHAR PrivateLogName = 'ddl_log',
VARCHAR TdpId = @jobvar_tdpid,
VARCHAR UserName = @jobvar_username,
VARCHAR UserPassword = @jobvar_password,
VARCHAR ErrorList = '3807'
);
DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA Trans_n_Accts_Schema
ATTRIBUTES
(
VARCHAR PrivateLogName = 'dataconnector_log',
VARCHAR DirectoryPath = @jobvar_datafiles_path,
VARCHAR FileName = 'accounts.txt',
VARCHAR Format = 'Delimited',
VARCHAR OpenMode = 'Read',
VARCHAR TextDelimiter = '|'
);
DEFINE OPERATOR LOAD_OPERATOR
TYPE LOAD
SCHEMA *
ATTRIBUTES
(
VARCHAR PrivateLogName = 'load_log',
VARCHAR TdpId = @jobvar_tdpid,
VARCHAR UserName = @jobvar_username,
VARCHAR UserPassword = @jobvar_password,
VARCHAR TargetTable = @jobvar_tgt_dbname || '.Trans',
VARCHAR LogTable = @jobvar_wrk_dbname || '.LG_Trans',
VARCHAR ErrorTable1 = @jobvar_wrk_dbname || '.ET_Trans',
VARCHAR ErrorTable2 = @jobvar_wrk_dbname || '.UV_Trans'
);
STEP Setup_Tables

(
/* TPT execution section */
APPLY
('DROP TABLE ' || @jobvar_wrk_dbname || '.ET_Trans;'),
('DROP TABLE ' || @jobvar_wrk_dbname || '.UV_Trans;'),
('DROP TABLE ' || @jobvar_tgt_dbname || '.Trans;'),
('CREATE TABLE ' || @jobvar_tgt_dbname
|| '.Trans (Account_Number VARCHAR(50),
Trans_Number VARCHAR(50),
Trans_Date VARCHAR(50),
Trans_ID VARCHAR(50),
Trans_Amount VARCHAR(50));')
TO OPERATOR (DDL_OPERATOR);
);
STEP Load_Trans_Table
(
APPLY
('INSERT INTO ' || @jobvar_tgt_dbname || '.Trans(Account_Number,
Trans_Number,
Trans_Date,
Trans_ID,
Trans_Amount)
VALUES(:Account_Number,
:Trans_Number,
:Trans_Date,
:Trans_ID,
:Trans_Amount);')
TO OPERATOR (LOAD_OPERATOR[2])
SELECT * FROM OPERATOR (FILE_READER[2]);
);
);

Note: Job variables can be maintained in a separate variable file, or they can be passed directly on the command line.
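For illustration, a job variable file matching the variable names used in the script above might look like this (the file name jobvars.txt and all values are hypothetical):

```
/* jobvars.txt -- illustrative values only */
jobvar_tdpid          = 'MyTdpId',
jobvar_username       = 'MyUser',
jobvar_password       = 'MyPassword',
jobvar_tgt_dbname     = 'Target_DB',
jobvar_wrk_dbname     = 'Work_DB',
jobvar_datafiles_path = 'C:\Temp'
```

The file is then supplied to the job with the -v option, for example: tbuild -f <script file name> -v jobvars.txt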

Job execution
tbuild -f <script file name> -z <checkpoint interval>

The -z option sets the checkpoint interval to the number of seconds specified.

SET CHECKPOINT INTERVAL 160 SEC
Or
SET CHECKPOINT INTERVAL 12 MINUTES

The checkpoint interval can be specified in a job script between the last DEFINE statement
and the APPLY statement(s).
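For example, in the script shown earlier the interval would be placed as follows (a sketch with the definition bodies elided):

```
DEFINE JOB FILE_LOAD
(
  DEFINE SCHEMA ... ;
  DEFINE OPERATOR ... ;

  SET CHECKPOINT INTERVAL 160 SEC

  APPLY ... ;
);
```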

tbuild
We have seen the tbuild command in many of the previous examples. This command is used to initiate a TPT job. The following key options may be used with tbuild:
-f Specifies the filename to be used as input.
-u Specifies job variable values which are to be applied.
-z Specifies a checkpoint interval to be used for the client side.
-s Specifies that job execution is to start at a specific job step.
-v Specifies that job attributes are to be read from an external file.
-l Specifies latency interval - how often to flush stale buffers.
-n Specifies that the job should continue, even if a step return code is greater than 4.

Note: If the checkpoint interval is specified both in the job script and with the tbuild -z command option, the -z option takes precedence.
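Putting several of these options together, a launch command might look like this (script and variable file names are illustrative):

```
tbuild -f load_job.txt -v jobvars.txt -z 120 -s Load_Trans_Table
```

This runs load_job.txt with job variables read from jobvars.txt, a 120-second checkpoint interval, and execution starting at the step named Load_Trans_Table (as in the earlier example script).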

Troubleshooting a Failed Job

Common Job Failures and Remedies
There are two categories of job failures. The evaluation and correction of each type of failure must be handled differently:
• Some jobs fail at launch, during execution of the tbuild statement, but before the initial job step has run.
• Some jobs launch successfully, and one or more job steps may execute successfully, but the job fails to run to completion.

The following sections describe common errors encountered by Teradata PT jobs.
When the Job Fails to Begin Running
When a job is launched but fails to begin execution, the associated errors appear in the public log. Errors are detected according to the launch sequence:
1. Teradata PT first processes the options specified in the tbuild command. If it detects tbuild command errors, the job stops.
   Error types encountered: tbuild command errors.
2. If Teradata PT encounters no tbuild command errors, it then parses the job script and creates a parallel job execution plan that will perform the operations specified in the APPLY statement(s) in the job script.
   Error types encountered:
   • Preprocessor errors -- incorrect use of job variables or the INCLUDE directive.
   • Job script compilation errors -- syntactic and semantic errors.
3. Only when script compilation is successful and the execution plan has been generated does Teradata PT allocate resources for and launch the various internal tasks required to execute the job plan.
   Error types encountered: system resource errors.

The following common types of tbuild errors may occur at job launch:
  • User errors executing the tbuild command
  • Script compiler errors
  • System resource errors:
      • semaphore errors
      • socket errors
      • shared memory errors


FAST EXPORT

TeradataWiki- Teradata Utilities Fast export
FastExport, as the name suggests, exports data from Teradata to a flat file. BTEQ also does the same thing, but the main difference is that BTEQ exports data in rows while FastExport exports data in 64K blocks. So if it is required to export data with lightning speed, FastExport is the best choice.

FastExport is a 64K-block utility, so it falls under the limit of 15 block utilities. That means a system can’t run more than a combination of 15 FastLoads, MultiLoads, and FastExports at a time.

Basic fundamentals of FastExport
  1. FastExport EXPORTS data from Teradata.
  2. FastExport only supports the SELECT statement.
  3. Choose FastExport over BTEQ when exporting more than half a million rows.
  4. FastExport supports multiple SELECT statements and multiple tables in a single run.
  5. FastExport supports conditional logic, conditional expressions, arithmetic calculations, and data conversions.
  6. FastExport does NOT support error files or error limits.
  7. FastExport supports user-written routines: INMODs and OUTMODs.

Sample fast export Script

.LOGTABLE Empdb.Emp_Table_log;
.LOGON TD/USERNAME,PWD;

.BEGIN EXPORT
SESSIONS 12;

.EXPORT OUTFILE C:\TEMP\EMPDATA.txt
 FORMAT BINARY;

SELECT EMP_NUM    (CHAR(10))
      ,EMP_NAME   (CHAR(50))
      ,SALARY     (CHAR(10))
      ,EMP_PHONE  (CHAR(10))
FROM Empdb.Emp_Table;

.END EXPORT;
.LOGOFF;

FastExport Modes
FastExport has two modes: RECORD and INDICATOR.
RECORD mode is the default, but you can use INDICATOR mode if required.
The difference between the two modes is that INDICATOR mode sets the indicator bits to 1 for column values containing NULLs.
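To request INDICATOR mode explicitly, the MODE option can be added to the .EXPORT statement. A sketch based on the earlier example script:

```
.EXPORT OUTFILE C:\TEMP\EMPDATA.txt
 MODE INDICATOR
 FORMAT FASTLOAD;
```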

FastExport Formats
FastExport can export data in the following formats:
  • FASTLOAD
  • BINARY
  • TEXT
  • UNFORMAT

TPT

TeradataWiki-Teradata Utilities TPT
The Teradata Parallel Transporter (TPT) utility is a combination of the BTEQ, FastLoad, MultiLoad, TPump, and FastExport utilities. So TPT can
  • Insert data into tables
  • Export data from tables
  • Update tables
TPT works around the concept of Operators and Data Streams.
The following diagram shows three main components:

  1. Producer or READ Operator
  2. Filter Operator or TRANSFORM Operator 
  3. Consumer Operator or WRITE Operator
The Producer operator reads from queues, files, relational databases, and non-relational sources.
The Filter operator transforms data using INMODs, WHERE clauses, APPLY filters, and user-defined functions.
The Consumer operator performs loads (INSERTs), updates, SQL inserts, and TPump-like streams.

How to Run a TPT Script

The easiest way to run a TPT script is to use the tbuild utility. You first create your script and then run tbuild, passing it the script name.

In the example below we create a sample script called Scriptname.txt (this is not a complete script), then run it using the tbuild -f command.

1) Create a script

DEFINE JOB CREATE_SOURCE_EMP_TABLE
(
DEFINE OPERATOR DDL_OPERATOR
DESCRIPTION 'TPT DDL OPERATOR'
TYPE DDL
ATTRIBUTES
(
VARCHAR TDPID = 'LOCALTD',
VARCHAR USERNAME = 'DBC',
VARCHAR PASSWORD = 'DBC'
);

2) Using the command prompt run TBuild command

   tbuild -f C:\Temp\Scriptname.txt

TPUMP

TeradataWiki-Teradata Utilities Tpump
TPump is a shortened name for Teradata Parallel Data Pump. As we have learned, FastLoad and MultiLoad load huge volumes of data, but TPump loads data one row at a time, using row-hash locks.
Because it locks at this level, and not at the table level like MultiLoad, TPump can make many simultaneous, or concurrent, updates on a table.

TPump performs Inserts, Updates, Deletes, and Upserts from flat files to populated Teradata tables at ROW LEVEL.

TPump supports
  • Secondary Indexes
  • Referential Integrity
  • Triggers
  • Join indexes
  • Pumping data in at varying rates

TPump also has limitations:
  • No concatenation of input data files is allowed.
  • TPump will not process aggregates, arithmetic functions or exponentiation.
  • The use of the SELECT function is not allowed.
  • No more than four IMPORT commands may be used in a single load task.
  • Dates before 1900 or after 1999 must be represented by the yyyy format for the year portion of the date, not the default format of yy.
  • On some network attached systems, the maximum file size when using TPump is 2GB.
  • TPump performance will be diminished if Access Logging is used.

TPump supports one error table. The error table does the following:
  • Identifies errors
  • Provides some detail about the errors
  • Stores a portion of the actual offending row for debugging
Like the other utilities, a TPump script is fully restartable as long as the log table and error tables are not dropped.

A Sample TPump Script
The script below follows these steps:
  1. Setting up a Logtable
  2. Logging onto Teradata
  3. Identifying the Target, Work and Error tables
  4. Defining the INPUT flat file
  5. Defining the DML activities to occur
  6. Naming the IMPORT file
  7. Telling TPump to use a particular LAYOUT
  8. Telling the system to start loading
  9. Finishing and logging off of Teradata

.LOGTABLE EMPDB.EMP_TPUMP_LOG;

.LOGON TDDB/USERNAME,PWD;

.BEGIN LOAD
PACK 5
RATE 10
ERRORTABLE EMPDB.TPUMP_ERROR;

.LAYOUT RECLAYOUT;

.FIELD    EMP_NUM         * INTEGER;
.FIELD    DEPT_NUM       * SMALLINT;
.FIELD    FIRST_NAME    * CHAR(20);
.FIELD    LAST_NAME     * VARCHAR(20);
.FIELD    SALARY            * DECIMAL(8,2);

.DML LABEL EMP_INS;
INSERT INTO EMPDB.EMP_TABLE
(EMP_NUM, DEPT_NUM, FIRST_NAME, LAST_NAME, SALARY)
VALUES 
(:EMP_NUM, :DEPT_NUM, :FIRST_NAME, :LAST_NAME, :SALARY);

.IMPORT INFILE C:\TEMP\TPUMP_FLAT_FILE.txt
LAYOUT RECLAYOUT
APPLY EMP_INS;

.END LOAD;
.LOGOFF;

MULTI LOAD

TeradataWiki-Teradata Utilities Multiload
MultiLoad has the capability to load multiple tables at one time from either a LAN or channel environment. That is why it is named MultiLoad.

The data load can perform multiple types of DML operations, including INSERT, UPDATE, DELETE and UPSERT on up to five (5) empty or populated target tables at a time.

Limitations of Multiload
Unique Secondary Indexes are not supported on a Target Table:Like FastLoad, MultiLoad does not support Unique Secondary Indexes (USIs). But unlike FastLoad, it does support the use of Non-Unique Secondary Indexes (NUSIs) because the index subtable row is on the same AMP as the data row.

Referential Integrity is not supported: Referential Integrity defined on a table would require additional system checking to enforce the referential constraints.

Triggers are not supported at load time: Disable all the Triggers prior to using it.

No concatenation of input files is allowed: It could impact a restart if the files were concatenated in a different sequence or data was deleted between runs.

No Join Indexes: All the join indexes must be dropped before running a MultiLoad and then recreate them after the load is completed

Will not process aggregates, arithmetic functions or exponentiation:If you need data conversions or math, you might be better off using an INMOD to prepare the data prior to loading it.

Multiload requires mainly Four components

Log Table: The log table stores processing record information during the load. This table contains one row for every MultiLoad running on the system.

Work Table(s): MultiLoad will automatically create one work table for each target table. Usually, in IMPORT mode MultiLoad can have one or more work tables, while in DELETE mode there is only one. The purposes of the work tables are 1) to perform DML tasks and 2) to apply the input data to the AMPs.

Error Tables: Like FastLoad, MultiLoad also has two error tables.
The first is the Error Table (ET). It contains all translation and constraint errors that may occur while the data is being acquired from the source(s).
The second is the Uniqueness Violation (UV) table, which stores rows with duplicate values for Unique Primary Indexes (UPIs).

Target table: Target tables can contain data; MultiLoad can load into a target table that is already populated.

MultiLoad Has Five IMPORT Phases:

Phase 1: Preliminary Phase: This is the basic setup phase, used for several preliminary set-up activities needed for a successful data load.
Phase 2: DML Transaction Phase: All the SQL Data Manipulation Language (DML) statements are sent to the Teradata database, as MultiLoad supports multiple DML functions.
Phase 3: Acquisition Phase: Once setup completes, the PE's plan is stored on each AMP. The table headers are locked, and the actual input data is stored in the work tables.
Phase 4: Application Phase: In this phase all DML operations are applied to the target tables.
Phase 5: Cleanup Phase: Table locks are released and all intermediate work tables are dropped.

MultiLoad has full RESTART capability in all of its five phases of operation.

Sample Multiload Script:

The script below follows these steps:

  1. Setting up a Logtable
  2. Logging onto Teradata
  3. Identifying the Target, Work and Error tables
  4. Defining the INPUT flat file
  5. Defining the DML activities to occur
  6. Naming the IMPORT file
  7. Telling MultiLoad to use a particular LAYOUT
  8. Telling the system to start loading
  9. Finishing loading and logging off of Teradata


.LOGTABLE EMPDB.EMP_TABLE_LOG;

.LOGON TDDB/USERNAME,PWD;

.BEGIN IMPORT MLOAD
TABLES EMPDB.EMP_TABLE
WORKTABLES EMPDB.EMP_WT
ERRORTABLES EMPDB.EMP_ET
            EMPDB.EMP_UV;

.LAYOUT FILECOLDESC1;

.FIELD EMP_NUM  * INTEGER ;
.FIELD SALARY   * DECIMAL(8,2);

.DML LABEL EMP_UPD;
UPDATE EMPDB.EMP_TABLE
SET SALARY=:SALARY
WHERE EMP_NUM=:EMP_NUM;

.IMPORT INFILE C:\TEMP\MLOAD_FLAT_FILE.txt
LAYOUT FILECOLDESC1 
APPLY EMP_UPD;

.END MLOAD;
.LOGOFF;


FAST LOAD

TeradataWiki-Teradata Utilities Fastload
FastLoad, as the name itself suggests, loads data in a fast way: it loads huge amounts of data from flat files into EMPTY tables.

FastLoad was mainly developed to load millions of rows into empty Teradata tables, which is why it is fast.
FastLoad creates a Teradata session for each AMP in order to maximize parallel processing. This gives excellent performance when loading data.

There are more reasons why FastLoad is so fast. Below are the limitations of FastLoad.
1) No Secondary Indexes are allowed on the Target Table: Usually a UPI or NUPI is used in Teradata to distribute the rows evenly across the AMPs. Secondary indexes are stored in a subtable block, often on a different AMP from the data row.

2) No Referential Integrity is allowed: Referential Integrity defined on a table would require additional system checking to enforce the referential constraints.

3) No Triggers are allowed at load time: FastLoad is focused on loading data at high speed, so triggers are not allowed.

4) Duplicate Rows (in Multi-Set Tables) are not supported: Multiset tables allow duplicate rows. FastLoad can load data into multiset tables, but duplicate rows are discarded.

5) No AMPs may go down (i.e., go offline) while FastLoad is processing: A down AMP must be repaired before the load process can be restarted.

6) No more than one data type conversion is allowed per column: Data type conversions cause high resource utilization on the system.

FastLoad requires mainly three components:

Log table
The log table stores processing record information during the load. This table contains one row for every FastLoad running on the system.

Empty Target table
As mentioned earlier target tables should be empty.

Error tables (two)
Each FastLoad requires two error tables. They are automatically created during the run and are populated only with errors that occur during the load.

The first error table is for any translation errors or constraint violations.
For example, a column is defined as INTEGER but the data coming from the source is in CHAR format, i.e., wrong data.

The second error table is for errors caused by duplicate values for Unique Primary Indexes.


FastLoad Has Two Phases:

FastLoad divides its job into two phases, both designed for speed.

Phase 1 or Acquisition Phase
The primary purpose of Phase 1 is to get the data from the host computer into the Teradata system.
The data moves in 64K blocks and is stored in work tables on the AMPs.
At this point the data is not yet stored on its correct destination AMP.

Phase 2 or Application Phase
Once the data has been moved, each AMP hashes its work table rows.
Each row is then transferred to the AMP where it will permanently reside.
Rows of a table are stored on the disks in data blocks.

Simple FastLoad script:

  1. Logging onto Teradata
  2. Defining the Teradata table that you want to load (target table)
  3. Defining the INPUT data file
  4. Telling the system to start loading
  5. Telling the system to insert data into the final target.
  6. End session.


LOGON TDDB/USERNAME,PWD;

CREATE TABLE EMPDB.EMP_TABLE
( EMP_NUM     INTEGER
,DEPT_NUM     SMALLINT
,FIRST_NAME   CHAR(20)
,LAST_NAME    VARCHAR(20)
,SALARY       DECIMAL(8,2))
UNIQUE PRIMARY INDEX(EMP_NUM);

DEFINE  EMP_NUM    (INTEGER)
       ,DEPT_NUM   (SMALLINT)
       ,FIRST_NAME (CHAR(20))
       ,LAST_NAME  (VARCHAR(20))
       ,SALARY     (DECIMAL(8,2))
FILE=C:\TEMP\EMP_FILE.txt;

BEGIN LOADING EMPDB.EMP_TABLE
ERRORFILES EMPDB.EMP_ERR1, EMPDB.EMP_ERR2
CHECKPOINT 10000000;

INSERT INTO EMPDB.EMP_TABLE
VALUES
( :EMP_NUM
  ,:DEPT_NUM
  ,:FIRST_NAME
  ,:LAST_NAME
  ,:SALARY );

END LOADING;
LOGOFF;

When can we RESTART FastLoad, and when can we not?

You Cannot RESTART FastLoad if
  • The Error Tables are DROPPED
  • The Target Table is DROPPED
  • The Target Table is CREATED

You Can RESTART FastLoad if
  • The Error Tables are NOT DROPPED in the script
  • The Target Table is NOT DROPPED in the script
  • The Target Table is NOT CREATED in the script
  • You have defined a checkpoint

Below are FastLoad commands used when creating FastLoad scripts:

AXSMOD to specify an access module (e.g., OLE-DB provider) that provides data to the FastLoad utility on network-attached client systems.

SESSIONS max min to specify the number of sessions. max = the maximum number of sessions that will be logged on; min = the minimum acceptable number of sessions.

ERRLIMIT to control a runaway error condition, such as a mis-definition of the input data.
Specify the maximum number of error records you want to occur before the system issues an ERROR and terminates the load.

TENACITY to specify the number of hours FastLoad will try to establish a connection. The default is no tenacity.
The statement must be placed before LOGON.

SLEEP to specify the number of minutes FastLoad waits before retrying a logon.
The default is 6 minutes. The statement must be placed before LOGON.
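A sketch of how these commands might appear at the top of the earlier FastLoad script (all values are illustrative; as noted above, TENACITY and SLEEP must precede LOGON):

```
SESSIONS 8 4;    /* at most 8 sessions, at least 4 */
TENACITY 4;      /* keep trying to connect for up to 4 hours */
SLEEP 10;        /* wait 10 minutes between logon retries */
LOGON TDDB/USERNAME,PWD;
ERRLIMIT 50;     /* terminate the load after 50 error records */
```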


BTEQ

TeradataWiki-Teradata Utilities  Bteq
Batch TEradata Query (BTEQ) is pronounced Bee-Teeeek.
BTEQ was the first utility and query tool for Teradata. BTEQ can be used as a Query tool, to load data a row at a time into Teradata and to export data off of Teradata a row at a time.

The features of BTEQ:
  • BTEQ can be used to submit SQL in either a batch or interactive environment.
  • BTEQ gives the outputs in a report format, where Queryman outputs data in a format more like a spreadsheet.
  • As said, BTEQ is an excellent tool for importing and exporting data.
There are mainly 4 types of BTEQ Exports.

Export DATA
This is set by .EXPORT DATA.
Generally, users will export data to a flat file format. This is called Record mode or DATA mode.
The data has no headers or white space between the data contained in each column, and the data is written to the file in a normal format.

Export INDICDATA
This is set by .EXPORT INDICDATA.
This mode is used to export data with extra indicator bytes that indicate NULLs in the columns of a row.

Export REPORT
This is set by .EXPORT REPORT
In this mode the output of the BTEQ export returns the column headers for the fields, white space, and expanded packed or binary data.
It looks just like a report, with column headers and data.

Export DIF
This is set by .EXPORT DIF. DIF stands for Data Interchange Format, which allows users to export data from Teradata to be directly utilized by spreadsheet applications like Excel, FoxPro, and Lotus.
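For comparison with the IMPORT script below, a minimal REPORT-mode export might look like this (the output file name EMP_REPORT.txt is illustrative; the table and columns follow the earlier examples):

```
.LOGON TDDB/USERNAME,PWD;

.EXPORT REPORT FILE=C:\TEMP\EMP_REPORT.txt

SELECT EMP_NUM
      ,EMP_NAME
      ,EMP_PHONE
FROM Empdb.Emp_Table;

.EXPORT RESET
.LOGOFF;
```

.EXPORT RESET closes the export file and returns BTEQ to normal interactive output.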

Below is an example of a BTEQ IMPORT script. We are taking data from a flat file C:\TEMP\EMPDATA.txt and importing the records into Empdb.Emp_Table.

.LOGON USERNAME/PWD;

.IMPORT DATA FILE=C:\TEMP\EMPDATA.txt, SKIP=2

.QUIET ON  
.REPEAT *

USING (EMP_NUM    INTEGER
      ,EMP_NAME   VARCHAR(20)
      ,EMP_PHONE  VARCHAR(10)
      )
INSERT INTO Empdb.Emp_Table
VALUES( :EMP_NUM
       ,:EMP_NAME
       ,:EMP_PHONE
       );
       
.LOGOFF;
.QUIT;

Below is a review of those commands for the above example.
  • QUIET ON limits BTEQ output to reporting only errors and request processing statistics.
  • REPEAT * causes BTEQ to repeat the following request until end-of-file; REPEAT n repeats it the specified number of times (the default is one record). Using REPEAT 100 would perform the loop 100 times.
  • The USING defines the input data fields and their associated data types coming from the host.