Monday, May 2, 2016

Unix Command to delete duplicate records of any specified field from a file

You have a file that lists records collected from several source files, and it now contains duplicate entries in one of its fields.
You need to remove the duplicates and print each unique value of that field exactly once.

File temp1.txt contains:

92.168.20.15: file1.txt
72.55.125.255: file3.txt
89.168.12.78: file1.txt
10.12.68.90: file4.txt
72.55.125.255: file2.txt
92.168.20.15: file4.txt
89.168.12.78: file5.txt

Now write the UNIX command to print only the unique IP Addresses.
Output should be:
92.168.20.15
72.55.125.255
89.168.12.78
10.12.68.90

Command:

$ cut -d":" -f1 temp1.txt | awk '!seen[$0]++'

Note that piping straight into uniq would not work here: uniq only collapses *adjacent* duplicate lines, and the duplicate IPs in temp1.txt are scattered. The awk expression prints each line only the first time it is seen, so the original order of the IPs is preserved. If order does not matter, `cut -d":" -f1 temp1.txt | sort -u` also works.
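The whole exercise can be reproduced as a short script. This is a sketch that recreates the sample file from the post and compares the order-preserving awk approach with sort-based alternatives:

```shell
#!/bin/sh
# Recreate the sample input from the post
cat > temp1.txt <<'EOF'
92.168.20.15: file1.txt
72.55.125.255: file3.txt
89.168.12.78: file1.txt
10.12.68.90: file4.txt
72.55.125.255: file2.txt
92.168.20.15: file4.txt
89.168.12.78: file5.txt
EOF

# awk keeps the first occurrence of each IP, preserving input order
cut -d":" -f1 temp1.txt | awk '!seen[$0]++'

# sort -u also deduplicates, but reorders the IPs (sorted order)
cut -d":" -f1 temp1.txt | sort -u

# sort | uniq -c shows how many files each IP appears in
cut -d":" -f1 temp1.txt | sort | uniq -c
```

The awk variant is usually preferable when the report should follow the order records arrived in; the sort variants are fine when only the set of unique values matters.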
