Thursday, June 16, 2005

How To Remove Azesearch Bar

My whole day was wasted on the damn Azesearch. What this malware does is remove the Google search bar, slow down your PC's performance, and dump a lot of porn, casino and other cheap sites into your bookmarks. Worst of all, it redirects you to random places where you can find nothing useful; searching on Google becomes painful, since it just sends you to "404" hell. It also frightens you by putting a horrible webpage on your desktop, and it installs its own search bar named AZESEARCH under the address bar in IE, which of course is of no help. I tried to find out why Azesearch is so mad at Google, but I couldn't. What I did find is that people love Google, and everyone who has confronted Azesearch has started hating it. It really is a curse.

I got it while searching the net for a key. A message prompted me to download an ActiveX component and, being overconfident about my PC, I downloaded it. That was that. A file named azesearch.ocx gets downloaded into the System32 folder; it is the main culprit. Simply deleting it won't help, because by then it has already made a lot of changes to your PC: it sets www.azesearch.com as the homepage, modifies the hosts file (which is why you can be redirected to "404" pages when you actually try to open Google or Yahoo), drops three or four DLL files, and creates an XML file in the System32 folder containing the site URLs that end up in your bookmarks.

To get rid of it I downloaded HijackThis, which lists all the changes made to your system, and then deleted every item I thought was suspicious. But frankly, this route is for experts only.

For common users it is better to download Lavasoft's Ad-Aware program; it will take care of the rest.

Well, the resources for getting rid of this nuisance are below:

** This page describes in detail how to get rid of it:

http://www.geekstogo.com/forum/You_Must_Read_This_Before_Posting_A_Hijackthis_Log-t2852.html

** This page is the EPIC on the Azesearch bar:
http://www3.ca.com/securityadvisor/pest/pest.aspx?id=453094055

In fact, it is a very common piece of malware nowadays, and you can find heated discussions about it on various forums. For everyone who is still unaffected, I can only suggest using Firefox instead of IE.

M.A.Hasim


Tuesday, June 07, 2005

More on "basename" command

Previously I only said that basename strips the extra part off a filename, and I showed it stripping the last extension of test.sh.sh to give test.sh. But that is not the whole picture of basename. Basename can also strip things off from the beginning; in fact, it just returns the filename. Umm... I should say it tries its best to return only the filename. See the following example.

$ basename /GrandDad/Dad/Son/GrandChild.txt
GrandChild.txt
$


The basename command analyzes the string and strips out the pieces it determines to be the directory path. Basically, it keeps the last piece of the string that doesn't contain a slash (/). It also removes any trailing slash before identifying the base portion of the name. In the following listing, the final slash is removed before the string is evaluated, and the word Son is returned as the "base" of the string.

$ basename /GrandDad/Dad/Son/
Son
$



The basename utility is very useful for situations in which files are replicated across multiple directories. In a hypothetical system, a series of directories contains the current versions of documents, the newest versions that will replace them, and the two earlier generations of the same documents. Assuming that the directories are named new, current, old, and oldest, a process is needed to check the names of all the documents in the new directory. Any document with the same name in the old directory is moved to the oldest directory, any document with the same name in the current directory is moved to the old directory, and finally the new document is moved to the current directory. Using a for name loop, the code to do this would look as follows (the listing is numbered for explanatory purposes):

1 for name in /docs/new/*
2 do
3 fname=`basename $name`
4 if [ -f /docs/old/$fname ]
5 then
6 mv /docs/old/$fname /docs/oldest/$fname
7 fi
8 if [ -f /docs/current/$fname ]
9 then
10 mv /docs/current/$fname /docs/old/$fname
11 fi
12 mv /docs/new/$fname /docs/current/$fname
13 done


The basename command is used to extract just the filename portion into the variable fname; from there, the $fname variable is used to construct the various tests and moves. The for name loop at line 1 sets the variable $name to each filename that matches /docs/new/* and then executes the logic between lines 2 and 13. At line 3, basename extracts just the file portion of the name into the variable $fname. Once the base portion of the filename is identified, a series of tests and moves can be executed. At line 4 the logic tests whether a file of the same name exists in the /docs/old directory; if it does, it is moved to /docs/oldest at line 6. This is repeated for the /docs/current and /docs/old directories at lines 8 through 11. Finally, at line 12 the file in /docs/new is moved into /docs/current. The logic at line 12 doesn't bother testing for the existence of the file, because the for name loop started at line 1 has already established that something exists in /docs/new.

This logic assumes that the /docs/new directory will only contain files that are to be installed in /docs/current and will not contain subdirectories.
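
If you cannot guarantee that, a small guard helps. Here is a minimal sketch of my own (not part of the original listing) that simply skips anything in /docs/new that is not a regular file:

for name in /docs/new/*
do
    # skip subdirectories and anything else that is not a regular file
    [ -f "$name" ] || continue
    fname=`basename "$name"`
    # ... the same moves as in the numbered listing above ...
done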

Removing extensions

I have discussed this already; this is just to brush up. The basename utility also allows an extension or suffix to be stripped from a file's basename. The extension to be stripped is given after the file path. In the following example, .txt is given after the filename, and the return from basename is the single word file:

$ basename /this/is/a/file.txt .txt
file
$

To illustrate this hypothetically, assume that documents in the master directory are copied to a holding directory for comments. Any comments are written to a file with the extension .comment. For example, a document named proposal.doc might have a corresponding file named proposal.comment containing any comments on the document.

A process to gather up the comments has two jobs to do. First, for any document that has received comments, it must collect the comment file and remove the temporary copy of the document. Second, it must raise an alert about any documents that have not been commented on. A process to do this is illustrated in the following numbered listing:

1 for name in /docs/for_comment/*.doc
2 do
3 bname=`basename $name .doc`
4 cname=${bname}.comment
5 dname=${bname}.doc
6 if [ -f /docs/for_comment/$cname ]
7 then
8 mv /docs/for_comment/$cname /docs/master/$cname
9 rm -f /docs/for_comment/$dname
10 else
11 echo "No comments received for $dname"
12 fi
13 done

At line 1, the name of every .doc file in /docs/for_comment is placed in turn into the $name variable. At line 3, the filename is stripped of both the directory information and the file extension. At lines 4 and 5, this root is built up again into two variables containing filename.comment and filename.doc. At line 6, a test is made for a file with a .comment extension. If one is found, the comment file is moved back into the master directory, and the temporary version of the document that was placed in /docs/for_comment is removed. If no comment file is found, a message is displayed on the console saying that comments are missing for the document. This is repeated for each .doc file in /docs/for_comment.
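
One caveat worth noting for both listings (my addition, not from the original text): if document names can contain spaces, the variable expansions should be quoted, otherwise the test and mv commands receive more arguments than they expect. A fragment showing the quoted form:

bname=`basename "$name" .doc`
cname="${bname}.comment"
if [ -f "/docs/for_comment/$cname" ]
then
    mv "/docs/for_comment/$cname" "/docs/master/$cname"
fi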


TAIL: a useful DEBUGGING command in UNIX

The tail command displays the last 10 lines of a file on standard output (usually the terminal). The number can be modified by using -nn, where nn is the number of lines to display. For example tail -20 log.txt will display the last 20 lines of the file log.txt.
If a + is used, the number of lines specified is from the beginning of the file. For example tail +20 log.txt will skip lines 1 through 19 of log.txt and display lines 20 through to the end of the file.
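
A side note of mine, not from the original write-up: some newer implementations of tail accept only the POSIX -n form, so the portable equivalents of the two commands above are:

$ tail -n 20 log.txt       # last 20 lines of log.txt
$ tail -n +20 log.txt      # lines 20 through to the end of log.txt
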
The tail utility is particularly useful for looking at the last lines logged in a log file. It also has a very useful option called the -f flag. If tail is used with a -f flag, the file being displayed is not closed, but is kept open. The tail program sleeps for one second, wakes, and checks to see if more lines have been added to the file. If they have, the new lines are displayed. This option is particularly useful for monitoring a log file that is currently active and being written to.
Assuming that log.txt is a logging file being written to by one or more programs, tail -f log.txt will display the last 10 lines of log.txt, and then update the screen each second with any new records added to log.txt by other programs.
tail -f is an excellent debugging tool. The first time I used it was to monitor a file that was being filled with transactions by cash registers as cashiers rang up purchases. For testing purposes, we set up the register, rang up different types of transactions, and then examined the result on the Unix screen, immediately. In this way we were quickly able to isolate transactions that were being written to the file incorrectly.
Because tail routes its output to standard output, it is possible to pipe the results of tail into another process. At one point in the debugging, we were concerned that the cash registers were writing garbage or nulls to the transaction file. tail processed the files as text, and it was not possible to see nulls in the data. We used tail to pump the data into od (octal dump) and displayed the files in hex, so that we could examine them for nulls. Using tail -f at the beginning of the stream meant that the transaction file was kept open and constantly pumped into od where the bytes were translated into hex and displayed.
Here is the command:
tail -f trx.txt | od -xc
Testing tail
You might want to try the following to test tail. First we will create a process that writes to a log file. Start the Korn shell by typing ksh and pressing Enter.
Type each of the following lines. As you press Enter after each line, the > prompt will appear indicating that more input is expected.
The command listed below creates a process that sleeps for three seconds, then wakes up and appends the date and time to log.txt. The process is submitted to the background by enclosing all of the commands in parentheses (opened on the first line and closed on the last) and ending with an ampersand (&). When you press Enter after the last line, the process is submitted to the background and runs as a detached job with no terminal to write to. It doesn't need a terminal, since it is writing to log.txt.
After you submit the job to the background, the shell responds with the id of the process that is now running in the background. Make a note of this process id, as you will need it later to kill the process. In this example it is 18495.
$ (while true
> do
> sleep 3
> echo `date` >>log.txt
> done
> )&
18495

Now that log.txt is being filled with date and time stamps, enter the following command:
tail -f log.txt
As you watch the screen, log.txt is filled with more information, and tail -f continues to display the information on the screen as in the following example.
$ tail -f log.txt
Tue Jun 7 12:33:14 GMT 2005
Tue Jun 7 12:33:17 GMT 2005
Tue Jun 7 12:33:20 GMT 2005
Tue Jun 7 12:33:23 GMT 2005
Tue Jun 7 12:33:26 GMT 2005
Tue Jun 7 12:33:29 GMT 2005
Tue Jun 7 12:33:32 GMT 2005
Tue Jun 7 12:33:35 GMT 2005
Tue Jun 7 12:33:38 GMT 2005
Tue Jun 7 12:33:41 GMT 2005
Tue Jun 7 12:33:44 GMT 2005
Tue Jun 7 12:33:47 GMT 2005

Press Control-C or the Delete key to stop your tail -f process depending on how your terminal is set up.
Finally you need to stop the background process that is logging to the log.txt file. Using the process id number you noted when the process started, type:
$ kill 18495
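
If you are still in the shell from which you started the loop, you can also refer to it by job number rather than process id. A small sketch (assuming it was the first background job, so the shell knows it as job 1):

$ jobs        # lists the background jobs started from this shell
$ kill %1     # kill job number 1 without needing the process id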


Monday, June 06, 2005

What is IFS?

IFS stands for Internal Field Separators (normally space, tab, and newline). The -d option of cut does something comparable for that single command: it sets the delimiter cut splits on, though it does not actually change the shell's IFS.
The shell uses the value stored in IFS, which is the space, tab, and newline characters by default, to delimit words for the read and set commands, when parsing output from command substitution, and when performing variable substitution.

IFS can be redefined to parse one or more lines of data whose fields are not delimited by the default white-space characters. Consider this sequence of variable assignments and for loops:
$
$ test=I_AM_A_GOOD_BOY
$ for i in $test
> do
> echo $i
> done
I_AM_A_GOOD_BOY
$
$
$ OIFS=$IFS
$ IFS=_
$ for i in $test
> do
> echo $i
> done
I
AM
A
GOOD
BOY
$
$ IFS=$OIFS


The first command assigns the string “I_AM_A_GOOD_BOY” to the variable named test. You can see from the first for loop that the shell treats the entire string as a single field. This is because the string does not contain a space, tab, or new line character.

After redefining IFS, the second for loop treats the string as five separate fields, each delimited by an underscore.
Notice that the original value of IFS was stored in OIFS (“O” for original) prior to changing its value. After you are finished using the new definition, it would be wise to return it to its original value to avoid unexpected side effects that may surface later on in your script.
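
Since IFS also drives word splitting for the read command (as mentioned above), the same trick works for colon-delimited data. A small sketch of my own, assuming a standard /etc/passwd layout; it prints one line per account:

$ OIFS=$IFS
$ IFS=:
$ while read user pass uid gid comment home shell
> do
>   echo "$user logs in with $shell"
> done < /etc/passwd
$ IFS=$OIFS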

TIP – The current value of IFS may be viewed using the following pipeline:
$ echo "$IFS" | od -b
0000000 040 011 012 012
0000004
$

The output of the echo command is piped into the octal dump command, giving you its octal equivalent. You can then use an ASCII table to determine what characters are stored in the variable. Hint: Ignore the first set of zeros and the second newline character (012), which was generated by echo.


Friday, June 03, 2005

Examples of interrupt handling in UNIX: Trapping trap... tap... tap...

Better to learn about trapping first. Shell procedures may use the trap command to catch or ignore Unix operating system signals. The form of the trap command is:
trap 'command-list' signal-list
Several traps may be in effect at the same time. If multiple signals are received simultaneously, they are serviced in ascending order. So what are the various signals available in UNIX? You can get a list by using the command kill -l.


Sample run from system:
$ kill -l
HUP INT QUIT ILL TRAP ABRT EMT FPE KILL BUS
SEGV SYS PIPE ALRM TERM USR1 USR2 CLD PWR WINCH
URG POLL STOP TSTP CONT TTIN TTOU VTALRM PROF XCPU
XFSZ WAITING LWP FREEZE THAW CANCEL LOST RTMIN RTMIN+1 RTMIN+2
RTMIN+3 RTMAX-3 RTMAX-2 RTMAX-1 RTMAX


These are the various signals, and every process is supposed to obey them. Some examples:

HUP is the hangup signal (it can also be referred to by its number, 1).
We often use kill -9; the 9 is the number of a signal. Count to the ninth entry in the list above and you will find it is KILL.

It is better to list them with their numbers, as the numbers will be used frequently hereafter:
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL
5) SIGTRAP 6) SIGIOT 7) SIGEMT 8) SIGFPE
9) SIGKILL 10) SIGBUS 11) SIGSEGV 12) SIGSYS
13) SIGPIPE 14) SIGALRM 15) SIGTERM 16) SIGUSR1
17) SIGUSR2 18) SIGCHLD 19) SIGPWR 20) SIGWINCH
21) SIGURG 22) SIGIO 23) SIGSTOP 24) SIGTSTP
25) SIGCONT 26) SIGTTIN 27) SIGTTOU 28) SIGVTALRM
29) SIGPROF


A few signals are caught far more often than the rest. The following are the signals usually caught with the trap command:
0 shell exit (for any reason, including end of file EOF).
1 hangup.
2 interrupt (^C).
3 quit (^\ ; causes program to produce a core dump).
9 kill (cannot be caught or ignored).
15 terminate; default signal generated by kill.
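
Signal 0 (shell exit) deserves a special mention: trapping it is a common way to clean up temporary files no matter how the script finishes. A small sketch of the pattern, with a made-up scratch file name of mine:

# remove the scratch file whenever the script exits (signal 0 = shell exit)
trap 'rm -f /tmp/scratch.$$' 0
echo "intermediate results" > /tmp/scratch.$$
# ... the rest of the script works with /tmp/scratch.$$ ...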


Now take a break and be cool. It's really interesting to perform a little surgery on trap. The command list is placed between single quotes, because the command line is scanned twice: once when the shell first encounters the trap command, and again when the trap is actually executed.
trap 'command-list' signal-list

The single quotes inhibit immediate command and variable substitution but are stripped off after the first scan, so that the commands are processed when the command is actually executed.
If command-list is not specified, then the action taken on receipt of any signal in the signal-list is reset to the default system action.
If command-list is an explicitly quoted null command (' ' or " "), then the signals in signal-list are ignored by the shell.
The command-list is treated like a subroutine call. The commands in the list are executed when the signal is trapped and control is then returned to the place at which it was interrupted.
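
To see two of those rules in action, here is a tiny sketch of mine: the first line makes the shell ignore interrupt and quit (an explicitly quoted null command list), and the last line, with no command list at all, resets them to the default system action.

trap '' 2 3      # ignore INT (Ctrl-C) and QUIT from here on
# ... a critical section that must not be interrupted ...
trap 2 3         # reset INT and QUIT to the default system action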

Now let's see a cool example of trap. Create a shell script named trap as follows:
======================vi trap =====================
trap 'echo "You woke me up!"; exit' 2
echo "Zzzzz..."
sleep 10
echo "Why didn't you wake me up?"

========================================================

The sample run:

$ sh trap
Zzzzz...
^CYou woke me up! ## Ctrl-C was pressed
$
$
$ sh trap
Zzzzz...
Why didn't you wake me up? ## no Ctrl-C pressed; after 10 seconds

$

What happens here is that on the very first scan the single quotes are stripped off the trap line. Later, while the script is running, if an interrupt occurs (i.e. Ctrl-C), it shows its red eyes: "How dare you! You woke me up!" And if you do nothing, it will later blame you with "Why didn't you wake me up?".


Umm... and what the heck is this one?

Type in the script below, make it executable, and then run it.
======================== vi sorry ==========
while true;
do
trap "echo 'You hit control-C!'" INT
sleep 60
done
============================================


Sample run:

$ sh sorry
^CYou hit control-C!
^CYou hit control-C!


Hahaha... got stuck, na? Sorry, sorry, that's why I chose the file name sorry. You should have used your brain, na! Anyway, if possible use another terminal: log in with the same user id and password and kill it from there, or, if you are coming from Windows, just close the window. There are two notable things in this program. Firstly, I used double quotes this time, and secondly, I used INT in place of a number. Using double quotes means the commands in the command list are substituted on the very first scan. And INT is nothing special; check the list we generated with kill -l. In fact, if we had used the number 2 in place of INT, it would have stuck you just the same. Hahaha... poor fellow. :P
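
In case you are stuck right now, here is a small sketch of the rescue from a second login. Since sorry traps only INT, a plain kill (which sends TERM) is enough; the process id below is just an example, use whatever ps shows you:

$ ps -ef | grep sorry        # find the process id of "sh sorry"
$ kill 27314                 # 27314 is a made-up example pid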

Hey, here is an improved version of that sorry script. Taste it... well, I won't take any responsibility this time. :P
======================= vi ExtremelySorry ===============
trap "echo 'You hit control-C!'" INT
trap "echo 'You tried to kill me!'" TERM

while true; do
sleep 60
done
==========================================================
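
Why is this one extremely sorry? Because it traps both INT and TERM, so Ctrl-C and a plain kill only make it talk back. The escape hatch is signal 9 (KILL), which, as noted above, cannot be caught or ignored. A small sketch from another login, again with a made-up example pid:

$ ps -ef | grep ExtremelySorry
$ kill -9 27315              # KILL cannot be trapped, so this really stops it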

OK, that's enough for now. We'll talk about more booby traps later. Bye!


Thursday, June 02, 2005

org.xml.sax.SAXParseException: XML declaration may only begin entities.

This is about a strange exception and the way I got it resolved. I created an XML file without the help of any editor like XMLSpy, and the error org.xml.sax.SAXParseException: XML declaration may only begin entities. came up while I was trying to parse it using SAX. The fault was that I had started writing the XML from the second line, keeping the first line blank. The declaration <?xml version="1.0" encoding="UTF-8"?> must be the very first thing in the file, with nothing before it, not even a blank line or leading whitespace. I made it the first line, and the problem was resolved.
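
A quick way to check this from the shell (a small sketch of mine, with data.xml as a stand-in for your file name): for a well-formed file, the first line printed should be exactly the declaration and nothing else.

$ head -1 data.xml
<?xml version="1.0" encoding="UTF-8"?>

If head -1 prints an empty line instead, the declaration is not first and SAX will throw this exception.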
