Saturday, October 23, 2010

rsync

Simple command line

rsync -r -a -v -e "ssh -l root"  /source/dir_to_upload/   xxx.xxx.xxx.xxx:/destination/




How to sync data between 2 servers automatically

http://www.howtomonster.com/2007/08/08/how-to-sync-data-between-2-servers-automatically/
Have you ever wanted to know how to easily synchronize the data between multiple servers automatically?

In this article I’ll explain how to set up 2 Linux servers to automatically synchronize data between a specific directory on each server. To do this we will use rsync, ssh key authentication, and a cron job.


Let’s call the 2 servers ‘SOURCESERVER’ and ‘DESTSERVER’:

SOURCESERVER = Source server (the server we’re connecting from to upload the data)

DESTSERVER = Destination server (the server we’re connecting to receive the data)





Part 1 - Setting up SSH key authentication



First, we need to make sure the DESTSERVER has the ability to use key authentication enabled. Find your sshd configuration file (usually ‘/etc/ssh/sshd_config’) and enable the following options if they are not already set.



RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile .ssh/authorized_keys
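
A quick way to check whether these options are already set in your sshd_config is:

# grep -E 'RSAAuthentication|PubkeyAuthentication|AuthorizedKeysFile' /etc/ssh/sshd_config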



If you edit the file, be sure to restart sshd afterwards.



# /etc/init.d/sshd restart



Next, on the SOURCESERVER we will create the public / private key pair to be used for authentication with the following command.



# ssh-keygen -t rsa



*Note: Do not enter a passphrase for this, just hit enter when prompted.



This should create 2 files, a public key file and a private key file.

The public key file (usually [homedir]/.ssh/id_rsa.pub) we will upload to the DESTSERVER.

The private key file (usually [homedir]/.ssh/id_rsa) we will keep on the SOURCESERVER.

*Be sure to keep this private key safe. With it, anyone can connect to the DESTSERVER (or any other server) that contains the matching public key.
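
You can confirm both files exist on the SOURCESERVER with a quick listing (the paths below are the usual ssh-keygen defaults mentioned above):

# ls -l ~/.ssh/id_rsa ~/.ssh/id_rsa.pub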



Now we will plant the public key we created onto the DESTSERVER.

Choose the user account on the DESTSERVER which you will connect to; we’ll call this user ‘destuser’ for now.

In that account’s home directory, create a ‘.ssh’ subdirectory, and in that directory create a new text file called ‘authorized_keys’. If it already exists, great, use the existing file.

Open the ‘authorized_keys’ file and paste in the contents of the public key you created in the previous step (id_rsa.pub). It should look something like the following



ssh-rsa AAAA...[long base64 key data]... sourceuser@SOURCESERVER



Save the file and change the permissions to 600 for the file and 700 for the ‘.ssh’ directory.
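
Put together, the whole step on the DESTSERVER might look something like the following sketch. It assumes the public key was first copied over to /tmp/id_rsa.pub on the DESTSERVER (that path is only an example) and that you are logged in there as destuser:

# /tmp/id_rsa.pub is an assumed example location - use wherever you actually copied the key
mkdir -p ~/.ssh
cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys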



Now to test that the keys are working.

From the SOURCESERVER, try logging in to the DESTSERVER as normal using ssh.



# ssh destuser@DESTSERVER



If all is working you should not be prompted for a password but instead connected directly to a shell on the DESTSERVER.





Part 2 - Creating the rsync script



Now for the rsync script.

I use a simple script such as the following



-------------------------------------------



#!/bin/bash



SOURCEPATH='/source/directory'

DESTPATH='/destination'

DESTHOST='123.123.123.123'

DESTUSER='destuser'

LOGFILE='rsync.log'



echo $'\n\n' >> $LOGFILE

rsync -av --rsh=ssh $SOURCEPATH $DESTUSER@$DESTHOST:$DESTPATH >> $LOGFILE 2>&1

echo "Completed at: `/bin/date`" >> $LOGFILE



-------------------------------------------



Copy this file into the home directory of the sourceuser on the SOURCESERVER

and modify the first 4 variables in the file.

SOURCEPATH (Source path to be synced)

DESTPATH (Destination path to be synced)

DESTHOST (Destination IP address or host name)

DESTUSER (User on the destination server)

Save it as something like ‘rsync.sh’

Set the permissions on the file to 700.

# chmod 700 rsync.sh



Now you should be able to run the script, have it connect to the DESTSERVER, and transfer the files all without your interaction.

The script will send all output to the ‘rsync.log’ file specified in the script.
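
For a first test you can simply run it by hand and then look at the log, for example (this assumes you saved the script as ‘rsync.sh’ in the sourceuser’s home directory as described above):

# cd ~
# ./rsync.sh
# tail rsync.log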





Part 3 - Setting up the cron job



Assuming everything has worked so far, all that’s left is to set up a cron job to run the script automatically at a predefined interval.



As the same sourceuser, use the ‘crontab’ command to create a new cron job.



# crontab -e



This will open an editor where you can schedule the job.

Enter the following to have the script run once every hour



-------------------------------------------

# Run my rsync script once every hour

0 * * * * /path/to/rsync.sh

-------------------------------------------
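
If you prefer a different interval, cron’s step syntax works the same way; for example, to run the script every 15 minutes instead of hourly the entry would be:

*/15 * * * * /path/to/rsync.sh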



Your 2 servers should now be syncing the chosen directory once every hour.

Hope this helped, let me know if you have any questions.

Thursday, October 14, 2010

migrate users on ubuntu

Migrate users from one Linux machine to another


February 14, 2010 By: mildtech Category: software



Have you ever had a need to migrate the users of a currently running Linux system from one installation to another? That would be a simple task if the user count were low. But what happens when the user count is in the hundreds? What do you do then? If you’re not using LDAP, you know you will have to migrate the users’ data, passwords, etc. from the old machine to the new. Believe it or not, this is just a matter of a few commands: not necessarily simple commands, but it’s not as complex as you would think.



In this article I am going to show you how to make this migration so your Linux users do not lose their data and their passwords are all retained.



What we’re migrating



The list is fairly simple:



* /etc/passwd - Contains information about the users.

* /etc/shadow - Contains the encrypted passwords.

* /etc/group - Contains group information.

* /etc/gshadow - Contains group encrypted passwords.

* /var/spool/mail - Contains users' email (the location will depend upon the mail server you use).

* /home/ - Contains users' data.



Unfortunately these files cannot simply be copied directly from one machine to another (that would be too easy). Just make sure you enter the following commands correctly.

Source machine



These are the commands you will need to run on the machine you are migrating users FROM. I will assume you are working directly as the root user (as on a default Fedora install), so all commands will be run as root:



mkdir ~/MOVE



The above command creates a directory to house all of the files to be moved.



export UGIDLIMIT=500



The above command sets the UID filter limit to 500. NOTE: This value will be dictated by your distribution. If you use Red Hat Enterprise Linux, CentOS, or Fedora this value is shown in the command above. If you use Debian or Ubuntu that limit is 1000 (not 500).
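
If you are not sure which limit applies to your system, the minimum UID/GID assigned to regular users is recorded in /etc/login.defs, so a quick check is:

grep -E '^UID_MIN|^GID_MIN' /etc/login.defs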



awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > ~/MOVE/passwd.mig



The above command copies only user accounts from /etc/passwd (using awk allows us to ignore system accounts.)
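
If you want to see exactly which accounts the filter picked up, you can count and list the entries in the file just created, for example:

wc -l ~/MOVE/passwd.mig
cut -d: -f1 ~/MOVE/passwd.mig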



awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534)' /etc/group > ~/MOVE/group.mig



The above command copies only the non-system groups from /etc/group.



awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534) {print $1}' /etc/passwd | tee - | egrep -f - /etc/shadow > ~/MOVE/shadow.mig



The above command copies only the matching users' entries from /etc/shadow.



cp /etc/gshadow ~/MOVE/gshadow.mig



The above command copies the /etc/gshadow file.



tar -zcvpf ~/MOVE/home.tar.gz /home



The above command archives /home.



tar -zcvpf ~/MOVE/mail.tar.gz /var/spool/mail



The above command archives the mail directory. NOTE: If you are using Sendmail this is the correct directory. If you are using Postfix, check its configuration; mail is usually delivered to /var/mail, /var/spool/mail, or to per-user Maildirs under /home (the last of which is already covered by the home archive).
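
If you are unsure where mail is actually delivered on a Postfix system, Postfix can report its own spool setting (postconf ships with Postfix):

postconf mail_spool_directory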



Now it’s time to move everything in ~/MOVE over to the new server. You can do this using the scp command like so:



scp -r ~/MOVE/* USER@IP_OF_NEW_SERVER:/home/USER/



Where USER is the username you will use to send the file and IP_OF_NEW_SERVER is the address of the new server. NOTE: If this server is not online yet you can always copy these files onto a thumb drive and move them that way.



Target machine



Now we’re working on the new server. Follow these commands (run as the root user):



mkdir ~/newsusers.bak



The above command will create a new directory that will house the backup of the current users.



cp /etc/passwd /etc/shadow /etc/group /etc/gshadow ~/newsusers.bak



The above command will copy the necessary files to the new backup directory.



cd /PATH/TO/DIRECTORY

cat passwd.mig >> /etc/passwd

cat group.mig >> /etc/group

cat shadow.mig >> /etc/shadow

/bin/cp gshadow.mig /etc/gshadow



The above commands will restore all password files onto the new system. NOTE: Where /PATH/TO/DIRECTORY is the location where you copied the files onto the new system.
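
After appending the entries it is worth verifying that the merged files are still consistent; pwck and grpck will report duplicate or malformed entries (the -r flag makes them read-only checks):

pwck -r
grpck -r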



cd /

tar -zxvf /PATH/TO/DIRECTORY/home.tar.gz



The above commands will first change you to the / directory and then unpack the archived /home directory. NOTE: Where /PATH/TO/DIRECTORY is the location where you copied the files onto the new system.



cd /

tar -zxvf /PATH/TO/DIRECTORY/mail.tar.gz



The above commands will first change you to the / directory and then unpack the archived /var/spool/mail directory. NOTE: Where /PATH/TO/DIRECTORY is the location where you copied the files onto the new system.



You can now reboot your system with the users in place.

Sunday, October 3, 2010

$ mysql --user=root --password


Enter password: ********

mysql> CREATE USER dotproject IDENTIFIED BY 'dotproject';

Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON dotproject.* TO dotproject;

Query OK, 0 rows affected (0.00 sec)
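
To confirm the new account works, you can log in with it from the shell and list its grants (just a quick check; the password is the one set above):

$ mysql --user=dotproject --password -e "SHOW GRANTS;"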

redmine

http://www.redmine.org/wiki/redmine/RedmineInstall

Saturday, October 2, 2010

collabtive

http://www.howtoforge.com/web-based-project-management-with-collabtive-on-ubuntu7.10-server

Database host: localhost


Database name: collabtive

Database user: collabuser

Database password: collabPW