How to delete duplicate files


Here is the script (my slightly fixed version of the original):

#!/bin/bash

# Print the size and name of every non-empty file under the current
# directory. The size is zero-padded to a fixed width (%012s) so that
# uniq can compare it as a fixed-length key.
find . ! -empty -type f -printf "%012s '%p'\n" | \
# Sort by size, then keep only lines whose 12-character size field is
# duplicated, i.e. files that share a size with at least one other file.
sort -n | uniq -D -w 12 | \
# Trim off the file size in preparation for the next stage.
cut -d" " -f2- | \
# Checksum the files of the same size, then sort so that identical
# checksums end up adjacent. (xargs strips the single quotes around
# each name; names containing quotes or newlines will still break here.)
xargs md5sum | sort | \
# Strip out unique checksums, leaving only the duplicates, one group per
# blank-line-separated block (-w32 compares just the 32-char MD5 hash).
uniq -w32 --all-repeated=separate | \
# Strip off the checksum column, leaving only the duplicate filenames.
cut -c35-

# You might want to give a size argument to the first find so it only
# reports files bigger than a certain size (e.g. 1 megabyte):
#find . -size +1M ! -empty -type f -printf "%012s '%p'\n" .....
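If your filenames may contain quotes or other awkward characters, a simpler variant sidesteps the quoting by using NUL separators. This is just a sketch, assuming GNU find and xargs, and it is slower because it checksums every file instead of only the same-sized ones:

# Checksum every non-empty file, NUL-separated, so spaces and quotes
# in filenames are handled safely; names with embedded newlines are
# still printed escaped by md5sum.
find . ! -empty -type f -print0 | xargs -0 md5sum | sort | \
uniq -w32 --all-repeated=separate | cut -c35-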

That is the script; I have fixed a little error in it (-w instead of -W).

As usual, you have to save it (here as noduplicate) and make it executable:

chmod +x noduplicate

Now you can run it with ./noduplicate.
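The output is one group of identical files per block, with blocks separated by a blank line. For example (hypothetical filenames):

./photos/img_001.jpg
./backup/img_001.jpg

./notes.txt
./old/notes.txt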

NOTE: the script only PRINTS the duplicates; to remove …
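The removal step is left out above, but one possible way to act on the output (a sketch, not from the original post) is to feed the non-blank lines to rm through GNU xargs with -p, which asks before each deletion. Be careful: every copy is listed, including the one you want to keep, so answer n for that one.

# Offer to delete each listed file, one rm per file, prompting first.
# WARNING: all copies appear in the list, so skip the one you keep.
./noduplicate | grep -v '^$' | xargs -d '\n' -n1 -p rm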
