Wednesday, January 31, 2007

SQL SERVER:-WHAT IS THE DIFFERENCE BETWEEN ISQL AND OSQL

Both are command-prompt utilities, but as per the help:-

The isql utility allows you to enter Transact-SQL statements, system procedures, and script files; and uses DB-Library to communicate with Microsoft® SQL Server™ 2000.

And

The osql utility allows you to enter Transact-SQL statements, system procedures, and script files. This utility uses ODBC to communicate with the server.

Use OSQL whenever possible, as it supports Unicode, which ISQL does not.
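A quick way to see why this matters: osql can read script files saved in Unicode (UTF-16), while the DB-Library-based isql cannot. The sketch below (plain UNIX shell, file names made up) only shows the encoding difference itself: the same one-line script doubles in size when saved as UTF-16LE.

```shell
# The same one-line SQL script saved as ASCII and as UTF-16LE (Unicode).
# osql can read the Unicode version; DB-Library-based isql cannot.
printf 'SELECT @@VERSION;\n' > ascii.sql
iconv -f US-ASCII -t UTF-16LE ascii.sql > unicode.sql

ascii_bytes=$(wc -c < ascii.sql)
unicode_bytes=$(wc -c < unicode.sql)
echo "ASCII: $ascii_bytes bytes, UTF-16LE: $unicode_bytes bytes"
# UTF-16LE spends two bytes per ASCII character, so the file is exactly twice the size.
```

The demo is only about the encoding; in practice you would feed such a script to osql with -i, and Unicode script files are normally saved with a byte-order mark.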
---: I am not responsible for any damages happened from the suggestion of my blog :---
Reach me at : m.a.hasim@inbox.com


SQL SERVER:HOW TO RUN A SQL SCRIPT WHICH IS ON YOUR LOCAL DRIVE?

a/
Create a file Hasim.sql on the C:\ drive as follows:-

SELECT t1.empname [Employee], COALESCE(t2.empname, 'No manager') [Manager] FROM emp t1 LEFT OUTER JOIN emp t2 ON t1.mgrid = t2.empid;

b/
Create a table emp in tempdb


use tempdb;

CREATE TABLE emp
(
empid int,
mgrid int,
empname char(10)
);

c/
Populate the table


INSERT emp SELECT 1,2,'Hasim'
INSERT emp SELECT 2,3,'Arun'
INSERT emp SELECT 3,NULL,'Divya'
INSERT emp SELECT 4,2,'Parthiban'
INSERT emp SELECT 5,2,'Priyanka'

d/
Run that SQL file on the C:\ drive as follows:-


EXEC master..xp_cmdshell 'isql -SAHASIM -Usa -Psa -ic:\Hasim.sql -n'


SQL SERVER: Maximum number of columns per table

Ans:-
========

1,024

Interested in finding out the maximum number of columns possible in a SELECT statement, and more?
Check the MSDN page:-
Maximum Capacity Specifications


SQL SERVER: What is CHECKSUM

FROM HELP FILE

CHECKSUM computes a hash value, called the checksum, over its list of arguments. The hash value is intended for use in building hash indices. If the arguments to CHECKSUM are columns, and an index is built over the computed CHECKSUM value, the result is a hash index, which can be used for equality searches over the columns.

With this we can easily detect whether any row has changed in any particular field.

-----------Example---------

use pubs

SELECT au_id , au_lname ,
CHECKSUM ( au_id , au_lname ) AS chk
FROM authors

--409-56-7008 Bennet 271639220

update authors set au_lname = 'Bennet' where au_id = '409-56-7008'

SELECT au_id , au_lname ,
CHECKSUM ( au_id , au_lname ) AS chk
FROM authors

--409-56-7008 Bennet 271639220( No changes in checksum )

update authors set au_lname = 'Bennnet' where au_id = '409-56-7008'
SELECT au_id , au_lname ,
CHECKSUM ( au_id , au_lname ) AS chk
FROM authors
--409-56-7008 Bennnet -1628839244( Checksum changed )

update authors set au_lname = 'Bennet' where au_id = '409-56-7008'
SELECT au_id , au_lname ,
CHECKSUM ( au_id , au_lname ) AS chk
FROM authors
--409-56-7008 Bennet 271639220 ( Again the check sum is the same )
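The same change-detection idea can be sketched outside SQL Server with the UNIX cksum utility (a CRC, not the same algorithm as CHECKSUM, so this is only an analogy): the value stays put while the data is unchanged and moves when even one character changes.

```shell
# Change detection with cksum, as an analogy to CHECKSUM:
# identical input gives an identical checksum, a one-character change gives a new one.
chk1=$(printf '409-56-7008 Bennet'  | cksum | awk '{print $1}')
chk2=$(printf '409-56-7008 Bennet'  | cksum | awk '{print $1}')   # same row again
chk3=$(printf '409-56-7008 Bennnet' | cksum | awk '{print $1}')   # au_lname changed
echo "$chk1 $chk2 $chk3"
```

Like CHECKSUM, any such hash can collide, so a matching value strongly suggests, but does not prove, that the data is unchanged.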



Thursday, January 25, 2007

Unix:How to know the code page of your HP UX

Most UNIX systems have more than one code page installed, but they may use the US-ASCII code page by default. You can change the code page by setting variables such as LC_CTYPE, LC_ALL and LANG.

Now the code page seen in my HP UX is as follows:-

$ locale
LANG=
LC_CTYPE="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_MESSAGES="C"
LC_ALL=
** Here "C" means the POSIX/C locale, i.e. ASCII

To change the language to English and make the system use Latin-1 (ISO 8859-1):-
$ setenv LANG en_US.iso88591
$ locale
LANG=
LC_CTYPE="en_US.iso88591"
LC_COLLATE="en_US.iso88591"
LC_MONETARY="en_US.iso88591"
LC_NUMERIC="en_US.iso88591"
LC_TIME="en_US.iso88591"
LC_MESSAGES="en_US.iso88591"
LC_ALL="en_US.iso88591"
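One point worth knowing here is the precedence among these variables: LC_ALL overrides every individual LC_* category, which in turn override LANG. A small sketch in Bourne/Korn shell syntax (the session above uses csh-style setenv):

```shell
# LC_ALL wins over LC_CTYPE and friends, which win over LANG.
LANG=en_US.iso88591
LC_ALL=C
export LANG LC_ALL
ctype=$(locale | grep '^LC_CTYPE' | cut -d= -f2)
echo "Effective LC_CTYPE: $ctype"   # the C locale, despite LANG
```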



Wednesday, January 24, 2007

Informatica 7.1: Can we use Excel as target?

On a UNIX machine you can use Excel neither as source nor as target, as there is no ODBC driver on UNIX for this purpose. But on Windows you can use Excel as a source, though not as a target.

Workaround:-
*** Contributed By Mr. Nitant Mahajan ***
1/
In Workflow Manager:
Set file properties-->keep delimiter as comma (CSV)-->Optional Quotes: Double.
* A comma delimiter serves the purpose only if your data does not contain commas *
The file should be saved in .csv format.



2/
Set the delimiter as TAB and save the file as .xls




Informatica 7.1:Can we install two instances of INFA 7.1 into the same HP UX machine?

No for a Windows installation, as the new installation will delete the earlier services.

Yes for UNIX. But:-
1/
In that case the installations must be done under different user accounts.
2/
Use different ports.



Tuesday, January 23, 2007

HowTo: How to run sqlplus queries on the fly?

Generally in a shell script we invoke sqlplus by passing it some SQL script, as follows:-
---------Sample.sql--------------------
select sysdate from dual;
---------------------------------------
---------Script1.sh--------------------
echo "Starting the script"
sqlplus scott/tiger@orcl @sample.sql
echo "Finishing the script"
---------------------------------------
Now you can run that query on the fly as follows:-

$pwd
/mnt/hasim/home
$ print "select sysdate from dual;"|sqlplus scott/tiger@orcl

Or
---------Script2.sh--------------------
echo "Starting the script"
sqlplus scott/tiger@orcl <<EOF
select sysdate from dual; -- write any number of SQL statements here
EOF
echo "Finishing the script"
---------------------------------------
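The here-document trick above works for anything that reads SQL on standard input. Since sqlplus may not be installed where you test this, the sketch below uses cat as a stand-in just to show the mechanics; in a real script you would put sqlplus scott/tiger@orcl in its place.

```shell
# Here-document mechanics, with cat standing in for sqlplus.
out=$(cat <<EOF
select sysdate from dual;
select user from dual;
EOF
)
echo "$out"
```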

You can check more on this page:-
How do I interact with Oracle via the UNIX Korn shell?



Informatica 7.1:./install: pmeval: Execute permission denied.

One of my friends faced this problem:-
$ pwd
/home/XXXX/informatica_install/PowerCenter/ipf64
$ cd /home/XXXX/informatica_install/PowerCenter
$ chmod 777 ipf64
$ cd /home/XXXX/informatica_install/PowerCenter/ipf64
$ ./install
Please choose the language to run install in from the choices below:
1. English
2. Japanese
0. Exit
> 1
===================================================
Welcome to Informatica Installation Wizard.
All Informatica Products Copyright 1996-2007.
===================================================
Please enter your Product Key for Informatica PowerCenter: /home/XXXX/informatica_install/PowerCenter/ipf64
./install: pmeval: Execute permission denied.
grep: can't open /tmp/tkf.22131
grep: can't open /tmp/tkf.22131
grep: can't open /tmp/tkf.22131
The key you entered is key. Please enter the product license key.



Solution:-
This problem occurs when we enter the wrong product key ( a 64-bit product key is needed instead of a 32-bit one ).
Once the proper key is entered, things will be smooth.
HOW TO FIND WHETHER YOUR UNIX IS 32-BIT OR 64-BIT




Informatica 7.1:Points to ponder while transferring data from an ISO-8859-1 database to a UTF-8 database

The problem I got was like this:

The source data is in ISO-8859-1 and we use Informatica for the ETL, but the data actually comes from several character sets. Since ISO-8859-1 is a single-byte encoding, the same code can stand for different characters in different source character sets; for example, one code x might mean "man" in the Japanese set and "woman" in the Korean set, which creates ambiguity in the database between the Japanese and Korean data. At present the target is also ISO-8859-1, so the same x is sent to the target and each user reads it through his own locale: a Korean user understands it as "woman" and a Japanese user as "man". Now we are going to make the target database UTF-8 and run Informatica in Unicode mode. The source system is still in ISO-8859-1 and, as noted, may use the same code for different character sets; when we load into the UTF-8 target, a unique code is generated for each character, so we need to identify the end user's requirement and give him the exact data.

Example:
If the end user is Korean, in the earlier case the data was just x, but UTF-8 generates a unique code. So before loading into the target we need to tell Informatica the source character set, since UTF-8 supports all character sets and gives a unique code for each character.

My intention:-
==============
We will decide at the time of running the sessions, or do a one-time conversion of the flat file to UTF-8 and then load it to the target.


I know the problem may seem hazy, so let's make it a little clearer before putting down the solution.

A database named ABCD is defined to support only one character set (ISO-8859-1), but it is getting populated with data from multiple character sets like SJIS, Big5, GB2312 etc. We accept that the ordering of the data is according to ISO-8859-1.

Slowly, as time passes, ABCD will have text data in multiple languages and multiple character sets, and later it becomes tough to identify which language and character set a given text belongs to. The UTF-8 encoding of Unicode keeps any current US-ASCII text unchanged (the vast majority of our text data), but stores data from other character sets in 2-, 3-, or 4-byte units.

Now there is a requirement to transfer data from ABCD to another database named EFGH, which is in Unicode. So we need to be able to identify the character set of every text string; let's assume we have identified that also.

The question is how to perform that data transfer through INFA 7.1.


Solution:-
That can be done by INFA. Just keep the following things in mind.

1/
Check the type of your source database character set ( select * from nls_database_parameters ).
2/
Check the type of your target database character set ( select * from nls_database_parameters ).
3/
Check what data movement mode has been set for the Informatica Server which you are to assign in your workflow
( go to the config file you pass while starting the Informatica Server in UNIX ).

Eg.
# Determine one of the two server data movement modes: UNICODE or ASCII.
# If not specified, ASCII data movement mode is assumed.
# ASCII:- PowerServer processes single-byte characters and does not perform codepage conversion.
# UNICODE:- Processes 2 bytes per character. Enforces codepage validation.

DataMovementMode=Unicode
/*************************************************************************************************
Set it to Unicode; only then will the end users have the full data, else there will be corrupt data.
If you are resetting it, restart the Informatica Server service afterwards.
*************************************************************************************************/
4/
If you have set all those things right, then there is nothing to worry about. Users should see data as per their locale.
5/
You may face some LM_ errors while loading data through INFA. In that case get back to me with the error log portion, like:
/*************************************************************************************************
MAPPING> CMN_1569 Server Mode: [UNICODE] CMN_1570 Server Codepage: [ISO
MAPPING> 8859-1 Western European]
*************************************************************************************************/
6/
If needed, disable codepage validation.
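The one-time flat-file conversion mentioned above can be done on UNIX with iconv. A small sketch (the file names are made up): the Latin-1 byte for 'é' becomes a two-byte UTF-8 sequence, while plain ASCII passes through unchanged.

```shell
# Re-encode an ISO-8859-1 flat file to UTF-8 before loading it to the target.
printf 'caf\351\n' > latin1.txt            # "café" in ISO-8859-1: 5 bytes
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt
l1=$(wc -c < latin1.txt)
u8=$(wc -c < utf8.txt)
echo "ISO-8859-1: $l1 bytes, UTF-8: $u8 bytes"   # the é alone grows to 2 bytes
```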

Some more pages I referred to while going through this, mainly pages related to the Oracle database. They are as follows:-
i/
A very basic knowledge about Oracle character set conversion.
*** DON'T ALTER YOUR DATABASE EVER TO SEE CHINESE CHARACTERS - THEN YOUR DBA MAY SEND YOU TO CHINA ***
Exzilla

NLS_LANG FAQ From Oracle - Very Good One.
NLS_FAQ
HOW TO CHECK WHAT IS THE CODEPAGE FOR YOUR HPUX

How to know whether my Oracle is 32 bit or 64 bit? My Unix OS is 64 bit or not?

Check out this page:-
Oracle Advice

In my case:-
SQL> select address from v$sql where rownum <3;

ADDRESS
----------------
C000000089B2A230
C0000000897F8D00
This means Oracle was 64-bit (the addresses are 16 hex digits, i.e. 8 bytes).

And the UNIX OS is 64-bit also:-
$ getconf KERNEL_BITS
64
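KERNEL_BITS is HP-UX specific; on Linux the closest getconf variable is LONG_BIT, which reports the word size of the userland. A sketch:

```shell
# Word size on Linux; HP-UX uses 'getconf KERNEL_BITS' instead, as shown above.
bits=$(getconf LONG_BIT)
echo "This system is ${bits}-bit"
```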

To check Unix/Linux OS version:-
$ uname -a
Linux abulhasim.fun.com 2.6.9-34.ELsmp #1 SMP Fri Feb 24 16:54:53 EST 2006 i686 i686 i386 GNU/Linux

Check this page:-
http://forums13.itrc.hp.com/service/forums/questionanswer.do?admit=109447627+1227684096982+28353475&threadId=1150062



Friday, January 19, 2007

SSIS v9.x:How To Pass Parameters At Runtime in dtexec

HOW TO SET VALUE AT RUNTIME
===================================

1/
Create a simple package having such a data flow.

2/
Create a package-level variable in the Package Explorer tab as follows.

3/
Set that variable as the connection string for the source flat file as follows:



4/
Create the configuration file from SSIS-->Package Configurations-->Add-->Etc-->Etc.
It will make an XML file.
5/
Run it from command prompt.
dtexec -f Flat2DB_configAtRunTime.dtsx
This time it will detect that there is a parameter bearing the value of the source file's connection string, and after searching through the config file in the same directory as the package it will get the value of that parameter at runtime from the element named "ConfiguredValue".
Now what if we want to pass the value at run time? Yes, that can be done also. To do so, first change the config file's "ConfiguredValue" to some unknown value, so that you can be sure that the value you pass at runtime works. Then run at the command prompt:
dtexec -f Flat2DB_configAtRunTime.dtsx /Set \Package.Variables[User::MyVar].Properties[Value];"C:\Documents and Settings\Hasim\My Documents\TEST\SSIS\emp1.txt"

SSIS v9.x:Error codes

This page gives you all details about the error codes in Hex as well as in Dec
MSDN:Integration Services Error and Message Reference

Friday, January 12, 2007

Informatica 7.1: XSD and XML file for SCOTT EMP table. My first mapping in XML

1/
As an XML editor I used Stylus.
With it you can get data into an XML file from the EMP table:
File-->New-->DB to XML Datasource.
Connect to the database. Select the EMP table and get the data as XML.

Stylus will run the following query
SELECT
XMLELEMENT(name "row",
XMLELEMENT(name "EMPNO",t.EMPNO),
XMLELEMENT(name "ENAME",t.ENAME),
XMLELEMENT(name "JOB",t.JOB),
XMLELEMENT(name "MGR",t.MGR),
XMLELEMENT(name "HIREDATE",t.HIREDATE),
XMLELEMENT(name "SAL",t.SAL),
XMLELEMENT(name "COMM",t.COMM),
XMLELEMENT(name "DEPTNO",t.DEPTNO)
)
FROM EMP t

to get data in XML.

The file is DOWNLOAD EMP XML FILE

2/
Create a schema for that emp.xml as XML-->Create Schema from XML content.
You can create a XSD or DTD ( internal/external)
DOWNLOAD EMP EXTERNAL DTD
DOWNLOAD EMP XML WITH INTERNAL DTD
DOWNLOAD EMP XSD

3/
Use that EMP XSD to create the XML view. Tools-->Source Analyzer-->Import XML Definition.

4/
Develop necessary transformation and flow data.

POINTS TO PONDER:
=================

0>> Check in Workflow Manager the path set for the source XML.
0>> Apply transformations if there is a need for data conversion.





Wednesday, January 10, 2007

Informatica 7.1: XML file as Source

What is a DTD

A DTD file contains metadata only. It contains the structure and the definitions of the elements and attributes which can be found in the main XML.

What is a XSD
Good basic tutorial is in W3Schools:-Introduction to XML Schema
An XML namespace identifies a group of similar elements that belong together.

What is the basic difference between XSD and DTD?

* DTD's are not namespace aware.

DTD's have #define, #include, and #ifdef -- or, less C-oriented,
the ability to define shorthand abbreviations, external content,
and some conditional parsing.

A DTD describes the entire XML document (even if it leaves "holes");
a schema can define portions.

XSD has a type system.

XSD has a much richer language for describing what element or attribute
content "looks like." This is related to the type system.

You can put a DTD inline into an XML document, you cannot do this with
XSD. This means DTD's are more secure (you only have to protect one
bytestream -- the xml/dtd -- and not multiple).

The official definition of "valid XML" requires a DTD. Since this may
be impractical, if not impossible, you often have to settle for
schema-valid, which is not quite the same.

In terms of validation functionality, XSD can define all the constraints that a DTD can define, and many more. To take a simple example, XSD can say that a particular attribute must be a valid date, or a number, or a list of URIs, or a string that is exactly 8 characters long. To take another example, XSD can define much richer constraints on uniqueness of values within a document. So XSD provides much more control over the XML than DTD does.

Can we supply an XML file having no XSD or DTD associated with it as a source?


- Yes. In that case the Designer will read the tags for the elements, reading each element to determine its datatype and precision, its possible occurrences and its position in the hierarchy.

* The Mapping Designer can create a source qualifier from the XSD/DTD supplied with the XML file.
* But this determination takes a long time if the source XML is large, so it is always better to have an XSD or DTD ( internal/external ) associated with that XML.
* The Mapping Designer can be configured to validate the input XML file against the supplied XSD or DTD.

What does "sequence" mean in a complex type XSD?

XSD can be of two types:-
o> Simple type XSD: Having one element inside it only.
Check XSD Simple Elements
o> Complex type XSD: Having more than one element inside that.
Check XSD Complex Elements

Whatever elements are described inside a sequence must appear in the same order in the XML file.

Check the example Check the person element

A sequence is one kind of indicator that tells the XML file how it should have elements in it: in which order, how many times an element may occur, and whether the elements/attributes are going to appear in the XML as a group or not.

Check XSD Complex Types Indicators


What do you mean by "element type any"?


The "anyAttribute" element enables us to extend the XML document with attributes not specified by the schema.

In that case the XML file may get some more attributes from XSDs other than the main XSD associated with it.

Check an example from W3Schools:
anyAttribute


What is pivoting in INFA?


Sometimes in the source XML we have multiple occurrences of the same elements. For example, in a customer.xml file there may be two sets of addresses for each customer: one for the home address, another for the office address. In that case we wish to have two different channels towards our target in the mapping, so we do pivoting.




Informatica 7.1:Error LM_36526:signal 6- Unexpected Condition Detected

PROBLEM

Sometimes a session containing a mapping with a Router transformation in it terminates unexpectedly. The error is:
LM_36526:signal 6- Unexpected Condition Detected

Warning: Unexpcted condition at: widgfld.cpp: 11


ERROR : LM_36526 [Wed May 11 15:12:40 2005] : (29532|36) Session task instance [s_Some_session]: DTM process [pid = 14102] exited due to signal [6].


SOLUTION

1/
Take the backup of INFA metadata.

2/
Run the following query on INFA metadata and check for results:

SELECT A.SUBJ_NAME ,B.MAPPING_NAME ,C.WIDGET_ID ,C.INSTANCE_NAME
FROM OPB_SUBJECT A ,OPB_MAPPING B ,OPB_WIDGET_INST C
WHERE A.SUBJ_ID=B.SUBJECT_ID AND B.MAPPING_ID =C.MAPPING_ID AND B.VERSION_NUMBER = C.VERSION_NUMBER AND C.WIDGET_TYPE=15 AND B.IS_VISIBLE > 0 AND C.WIDGET_ID IN
(select w.WIDGET_ID from opb_widget_field wf, opb_widget w
where w.widget_type = 15 and w.widget_id = wf.widget_id and w.version_number = wf.version_number and w.is_visible > 0
and wf.widget_fld_prop = 0 and wf.porttype = 2)

3/
If the query returns more than one row, then continue with the following steps.

4/
CREATE TABLE OPB_WIDGET_FIELD_BCKUP AS SELECT * FROM OPB_WIDGET_FIELD; --Taking backup

5/
update opb_widget_field set widget_fld_prop =
(select f2.field_id from opb_widget_field f2, opb_widget w
where f2.widget_id = opb_widget_field.widget_id and f2.version_number = opb_widget_field.version_number and f2.widget_id = w.widget_id and f2.version_number = w.version_number and w.is_visible > 0 and w.widget_type = 15 and f2.field_name =
substr(opb_widget_field.field_name,1,length(opb_widget_field.field_name)-1))
where widget_fld_prop=0 and porttype = 2 and exists (select * from opb_widget w where w.widget_type = 15 and w.is_visible > 0 and w.widget_id = opb_widget_field.widget_id and w.version_number = opb_widget_field.version_number)

6/
Run the problematic session again.

---: My blog is not responsible for any damages happened from the suggestion of my blog :---
Reach me at : m_a_hasim@yahoo.com


Tuesday, January 09, 2007

Informatica 7.1:Core dump

Sometimes we may get a core file in the INFA server directory due to unexpected behaviour of INFA processes.

A core dump may occur because:-
1/ In a ksh you are instructing it to run a script in sh.
2/ From a 7.1.x client you are trying to access/execute a session of another version (8.x).
3/ The machine where pmserver resides has too little RAM.

Core dump files are generally created by the kernel when a process tries to access a memory area that has not been assigned to it by the kernel.

How to debug

1/
$ file core
core: core file from 'pmdtm'
$ adb pmdtm core
adb: warning: Cannot locate unwind table ...
adb: warning: Stack backtrace may fail.
adb> $c
_raise + 0x24
_abort_C + 0x160
abort + 0x1c
Alloca Error: Cannot unwind alloca frame. (UNWIND)
adb> $q
$
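One more thing worth checking before you try to reproduce such a crash: the shell's core-file size limit must allow the kernel to write the core at all. A sketch (raising the limit can fail if the administrator has fixed a hard limit of 0):

```shell
# Make sure core files can actually be written before reproducing the crash.
ulimit -c unlimited 2>/dev/null || true
limit=$(ulimit -c)
echo "core file size limit: $limit"   # 'unlimited' or a block count
```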


Monday, January 08, 2007

Informatica 7.1:How to upgrade INFA from 7.1 to 8.1

We are to upgrade our INFA from 7.1 to 8.1.

We have chalked out the strategy as follows:

1/
At first we need to back up our repository using the backup command in pmrep, whose syntax is:

backup
-o #output file name#
-f (overwrite existing output file)
-d #description#
[-b (skip workflow/session logs)]
[-j (skip deploy group history)]
[-q (skip MX data)]

2/
Then copy the old repository into the new location without copying the content.

3/
Install the upgraded version of the client and server.

4/
From the Administration Console, create and select the Repository Service.
Be alert about the database type, connection info and codepage-->Do not create repository content-->Create.
Check everything if you are sure-->Enable-->Actions-->Upgrade Contents.
Check the contents once again.



Informatica 7.1: All available tasks that can be done by pmrep

run
-f #script file name#
[-o #output file name#]
[-e #echo commands#]
[-s #stop at first error#]

connect
-r #repository name#
-n #repository user name#
[-x #repository password# |
-X #repository password environment variable#]
-h #repserver host name#
-o #repserver port number#

backup
-o #output file name#
-f (overwrite existing output file)
-d #description#
[-b (skip workflow/session logs)]
[-j (skip deploy group history)]
[-q (skip MX data)]

updatestatistics

updateemailaddr
-d #folder name#
-s #session name#
-u #success email address#
-f #failure email address#

updatesrvvar
-s #server name#
-v #variable name#
-u #new value#

updatetargprefix
-f #folder name#
-s [#qualifying path#.]#session name#
[-t #target name#]
-p #prefix name#
-n

updatesrcprefix
-f #folder name#
-s [#qualifying path#.]#session name#
[-t #source name#]
-p #prefix name#
-n

updateseqgenvals
-f #folder name#
[-m #mapping name#]
-t #sequence generator name#
[-s #start value#]
[-e #end value#]
[-i #increment by#]
[-c #current value#]

listobjects
-o #object type#
[-t #object subtype#]
[-f #folder name#]
[-c #column separator#]
[-r #end-of-record indicator#]
[-l #end-of-listing indicator#]
[-b #verbose#]

listtablesbysess
-f #folder name#
-s [#qualifying path#.]#session name#
-t #source or target#

listallusers

listallgroups

createuser
-u #repository user name#
[{-p #password#
-c #password again#} |
-P #password environment variable#]
[-d #description#]
[-g #group name#]
[-i #contact info# ] }

registeruser
-u #repository user name#
-l #external login#
[-d #description#]
[-g #group name#]
[-i #contact info# ] }

rmuser
-u #repository user name#

changepasswd
[{-p #new password#
-c #password again#} |
-P #new password environment variable#]

edituser
-u #repository user name#
[{-p #new password#
-c #password again#} |
-P #new password environment variable#]
[ -l #new login# ]
[ -d #new description# ]
[ -e #enabled: yes|no# ]
[ -i #contact info# ]

addusertogroup
-u #repository user name#
-g #group name#

rmuserfromgroup
-u #repository user name#
-g #group name#

creategroup
-g #group name#
[-d #description#]

rmgroup
-g #group name#

listallprivileges

addprivilege
-p #privilege#
{ [-u #repository user name#]
[-g #group name#] }

rmprivilege
-p #privilege#
{ [-u #repository user name#]
[-g #group name#] }

createconnection
-s #relational connection subtype#
-n #connection name#
-u #user name#
[-p #password# |
-P #password environment variable#]
[-c (connect string, required for Oracle, Informix, Db2 and ODBC)]
-l #code page#
[-r (Rollback Segment, valid for Oracle connection only)]
[-e (Environment SQL)]
[-z (Packet Size, valid for Sybase and MS SQL Server connection)]
[-b (Database Name, valid for Sybase, Teradata and MS SQL Server connection)]
[-v (Server Name, valid for Sybase and MS SQL Server connection)]
[-d (Domain Name, valid for MS SQL Server connection only)]
[-t (1 for Trusted Connection, valid for MS SQL Server connection only)]
[-a (Data Source Name, valid for Teradata connection only)]

switchconnection
-o #old connection name#
-n #new connection name#

deleteconnection
-n #relational connection name#
[-f (force delete)]

showconnectioninfo

updateconnection
-t #database type#
-d #database connection name#
-u #new user name#
[-p #new database password# |
-P #new database password environment variable#]
-c #new database connection string#

updateserver
-v #server name#
[-h #new host name#]
[-k #new servername#]
[-o #new port number#]
[-t #new timeout value#]
[-p #new protocol name#]
[-l #new codepage name#]

deleteserver
-v #server name#

addserver
-v #server name#
-h #new host name#
[-o #new port number#]
[-t #new timeout value#]
[-p #new protocol name#]
[-l #new codepage name#]

createFolder
-n #folder name#
[-d #folder description#]
[-o #owner name#]
[-g #group name#]
[-s #shared folder#]
[-p #permissions#]

deleteFolder
-n #folder name#

modifyFolder
-n #folder name#
[-d #folder description#]
[-o #owner name#]
[-g #group name#]
[-s #shared folder#]
[-p #permissions#]
[-r #new name#]

truncatelog
-t #all | #endtime# (MM/DD/YYYY HH24:MI:SS)#
[-f #folder name#]
[-w #workflow name#]

createlabel
-a #label name#
[-c #comments#]

deletelabel
-a #label name#
[-f #force delete#]

applylabel
-a #label name#
{ [-n #object name#
-o #object type#
-t #object subtype#]
[-v #version number]
[-f #folder name#] }
{ [-i #persistent input file#] }
[-d #dependency object types#]
[-p #dependency direction (children, parents, or both)#]
[-s #include pk-fk dependency#]
[-g #across repositories#]
[-m #move label#]
[-c #comments#]

createdeploymentgroup
-p #deployment group name#
[-t #deployment group type (static or dynamic)#]
{ [-q #query name#]
[-u #query type (shared or personal)#] }
[-c #comments#]

deletedeploymentgroup
-p #deployment group name#
[-f #force delete#]

cleardeploymentgroup
-p #deployment group name#
[-f #force clear#]

addtodeploymentgroup
-p #deployment group name#
{ [-n #object name#
-o #object type#
-t #object subtype#]
[-v #version number]
[-f #folder name#] }
{ [-i #persistent input file#] }
[-d #dependency types (all, non-reusable or none)#]

findcheckout
[-o #object type#]
[-f #folder name#]
[-u #all users#]
[-c #column separator]
[-r #end-of-record separator#]
[-l #end-of-listing indicator#]
[-b #verbose#]

checkin
-o #object type#
[-t #object subtype#]
-n #object name#
-f #folder name#
-c #comments#

undocheckout
-o #object type#
[-t #object subtype#]
-n #object name#
-f #folder name#

executequery
-q #query name#
[-t #query type (shared or personal)#]
[-u #output persistent file name#]
[-a #append#]
[-c #column separator]
[-r #end-of-record separator#]
[-l #end-of-listing indicator#]
[-b #verbose#]

listobjectdependencies
{ [-n #object name#
-o #object type#
-t #object subtype#]
[-v #version number]
[-f #folder name#] }
{ [-i #persistent input file#] }
[-d #dependency object types#]
-p #dependency direction (children, parents, or both)#
[-s #include pk-fk dependency#]
[-g #across repositories#]
[-u #output persistent file name#]
[-a #append#]
[-c #column separator]
[-r #end-of-record separator#]
[-l #end-of-listing indicator#]
[-b #verbose#]

deployfolder
-f #folder name#
-c #control file name#
-r #target repository name#
[-n #target repository user name#
[-x #target repository password# |
-X #target repository password environment variable#]
-h #target repserver host name#
-o #target repserver port number#]
[-l #log file name#]

deploydeploymentgroup
-p #deployment group name#
-c #control file name#
-r #target repository name#
[-n #target repository user name#
[-x #target repository password# |
-X #target repository password environment variable#]
-h #target repserver host name#
-o #target repserver port number#]
[-l #log file name#]

objectimport
-i #input xml file name#
-c #control file name#
[-l #log file name#]

objectexport
{ [-n #object name#
-o #object type#
-t #object subtype#]
[-v #version number]
[-f #folder name#] }
{ [-i #persistent input file#] }
[-m #export pk-fk dependency#]
[-s #export objects referred by shortcut#]
[-b #export non-reusable dependents#]
[-r #export reusable dependents#]
-u #xml output file name#
[-l #log file name#]

validate
{ [-n #object name#
-o #object type (mapplet, mapping, session, worklet, workflow)#
[-v #version number]
[-f #folder name#] }
{ [-i #persistent input file#] }
[-s #save upon valid#]
{ [-k #check in upon valid#
-m #check in comments#] }
[-p #output option types (valid, saved, skipped, save_failed, invalid_before, invalid_after, or all)#]
[-u #output persistent file name#]
[-a #append#]
[-c #column separator]
[-r #end-of-record separator#]
[-l #end-of-listing indicator#]
[-b #verbose#]

addrepository
-h #repserver host name#
-o #repserver port number#
[-a #repserver password# |
-A #repserver password environment variable#]
-r #repository name#
-t #database type#
-u #database user name#
[-p #database password# |
-P #database password environment variable#]
[-m (Trusted Connection, valid for Microsoft SQL Server only)]
-c #database connect string#
[-d #code page name#]
[-e #DB2 tablespace name#]

stoprepository
[-a #repserver password# |
-A #repserver password environment variable#]
[-h #hostname#
-o #port number#
-r #repository name#]

enablerepository
-h #hostname#
-o #port number#
[-a #repserver password# |
-A #repserver password environment variable#]
-r #repository name#

disablerepository
-h #hostname#
-o #port number#
[-a #repserver password# |
-A #repserver password environment variable#]
-r #repository name#

register
-r #repository name#
-n #repository user name#
[-x #repository password# |
-X #repository password environment variable#]
[-a #GDR repserver password#
-A #GDR repserver password environment variable#]
[-h #LDR host name# (only if LDR is managed by a different RepServer)
-o #LDR port number# (only if LDR is managed by a different RepServer)]

unregister
-r #repository name#
-n #repository user name#
[-x #repository password# |
-X #repository password environment variable#]
[-a #GDR repserver password#
-A #GDR repserver password environment variable#]
[-h #LDR host name# (only if LDR is managed by a different RepServer)
-o #LDR port number# (only if LDR is managed by a different RepServer)]

notify
-h #hostname#
-o #port number#
[-a #repserver password# |
-A #repserver password environment variable#]
-r #repository name#
-t #notify | broadcast# (message type)
-m #message#

help
help [command].
Print help. If command is specified print help for command, else print help for all commands

cleanup
help completed successfully.


Informatica 7.1:How to import/export images into Oracle tables through INFA?

Informatica, up to version 7.1, does not support images/PDFs/zip files.
But we can still do so the standard Oracle way, by calling stored procedures from an INFA mapping.
Please check for the stored procedures:
Oracle interMedia User's Guide and Reference
From ASK TOM ( Good article )
