echo $((20.0/7))
zcalc
bc <<< 20+5/2
bc <<< 'scale=4;20+5/2'
expr 20 + 5
calc 2 + 4
node -pe 20+5/2 # Uses the power of JavaScript, e.g.: node -pe 20+5/Math.PI
echo 20 5 2 / + p | dc
echo 4 k 20 5 2 / + p | dc
perl -E "say 20+5/2"
python -c "print 20+5/2"
python -c "print 20+5/2.0"
clisp -x "(+ 2 2)"
lua -e "print(20+5/2)"
php -r 'echo 20+5/2;'
ruby -e 'p 20+5/2'
ruby -e 'p 20+5/2.0'
guile -c '(display (+ 20 (/ 5 2)))'
guile -c '(display (+ 20 (/ 5 2.0)))'
slsh -e 'printf("%f",20+5/2)'
slsh -e 'printf("%f",20+5/2.0)'
tclsh <<< 'puts [expr 20+5/2]'
tclsh <<< 'puts [expr 20+5/2.0]'
sqlite3 <<< 'select 20+5/2;'
sqlite3 <<< 'select 20+5/2.0;'
echo 'select 1 + 1;' | sqlite3
psql -tAc 'select 1+1;'
R -q -e 'print(sd(rnorm(1000)))'
r -e 'cat(pi^2, "\n")'
r -e 'print(sum(1:100))'
smjs
jspl
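Note that plain bash $(( )) arithmetic is integer-only (the floating-point example at the top needs zsh). One more calculator that is available almost everywhere is awk:
awk 'BEGIN { print 20 + 5/2 }'
awk 'BEGIN { printf "%.4f\n", 20/7 }'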
exim commands
Print a count of the messages in the queue:
exim -bpc
Print a listing of the messages in the queue (time queued, size, message-id, sender, recipient):
exim -bp
Print a summary of messages in the queue (count, volume, oldest, newest, domain, and totals):
exim -bp | exiqsumm
Print what Exim is doing right now:
exiwhat
Test how exim will route a given address:
root@localhost# exim -bt [email protected]
[email protected] <-- [email protected]
  router = localuser, transport = local_delivery
root@localhost# exim -bt [email protected]
[email protected]
  router = localuser, transport = local_delivery
root@localhost# exim -bt [email protected]
  router = lookuphost, transport = remote_smtp
  host mail.remotehost.com [1.2.3.4] MX=0
Run a pretend SMTP transaction from the command line, as if it were coming from the given IP address. This will display Exim’s checks, ACLs, and filters as they are applied. The message will NOT actually be delivered.
exim -bh 192.168.11.22
Display all of Exim’s configuration settings:
exim -bP
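You can also print just a single setting instead of the whole configuration (the option names here are only examples):
exim -bP queue_run_max
exim -bP log_file_path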
Searching the queue with exiqgrep
Exim includes a utility that is quite nice for grepping through the queue, called exiqgrep. Learn it. Know it. Live it. If you’re not using this, and if you’re not familiar with the various flags it uses, you’re probably doing things the hard way, like piping `exim -bp` into awk, grep, cut, or `wc -l`. Don’t make life harder than it already is.
First, various flags that control what messages are matched. These can be combined to come up with a very particular search.
Use -f to search the queue for messages from a specific sender:
exiqgrep -f [luser]@domain
Use -r to search the queue for messages for a specific recipient/domain:
exiqgrep -r [luser]@domain
Use -o to print messages older than the specified number of seconds. For example, messages older than 1 day:
exiqgrep -o 86400 [...]
Use -y to print messages that are younger than the specified number of seconds. For example, messages less than an hour old:
exiqgrep -y 3600 [...]
Use -s to match the size of a message with a regex. For example, 700-799 bytes:
exiqgrep -s '^7..$' [...]
Use -z to match only frozen messages, or -x to match only unfrozen messages.
There are also a few flags that control the display of the output.
Use -i to print just the message-id as a result of one of the above two searches:
exiqgrep -i [ -r | -f ] ...
Use -c to print a count of messages matching one of the above searches:
exiqgrep -c ...
Print just the message-id of the entire queue:
exiqgrep -i
Managing the queue
The main exim binary (/usr/sbin/exim) is used with various flags to make things happen to messages in the queue. Most of these require one or more message-IDs to be specified in the command line, which is where `exiqgrep -i` as described above really comes in handy.
Start a queue run:
exim -q -v
Start a queue run for just local deliveries:
exim -ql -v
Remove a message from the queue:
exim -Mrm <message-id> [ <message-id> ... ]
Freeze a message:
exim -Mf <message-id> [ <message-id> ... ]
Thaw a message:
exim -Mt <message-id> [ <message-id> ... ]
Deliver a message, whether it’s frozen or not, whether the retry time has been reached or not:
exim -M <message-id> [ <message-id> ... ]
Deliver a message, but only if the retry time has been reached:
exim -Mc <message-id> [ <message-id> ... ]
Force a message to fail and bounce as “cancelled by administrator”:
exim -Mg <message-id> [ <message-id> ... ]
Remove all frozen messages:
exiqgrep -z -i | xargs exim -Mrm
Remove all messages older than five days (86400 * 5 = 432000 seconds):
exiqgrep -o 432000 -i | xargs exim -Mrm
Freeze all queued mail from a given sender:
exiqgrep -i -f [email protected] | xargs exim -Mf
View a message’s headers:
exim -Mvh <message-id>
View a message’s body:
exim -Mvb <message-id>
View a message’s logs:
exim -Mvl <message-id>
Add a recipient to a message:
exim -Mar <message-id> <address> [ <address> ... ]
Edit the sender of a message:
exim -Mes <message-id> <address>
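The exiqgrep search flags and the exim queue flags combine naturally. For example, to force delivery of everything queued for a single recipient (the address is a placeholder):
exiqgrep -i -r [luser]@domain | xargs exim -M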
fast database cdb
What is cdb?
cdb is a fast, reliable, simple package for creating and reading constant databases. Its database structure provides several features:
- Fast lookups: A successful lookup in a large database normally takes just two disk accesses. An unsuccessful lookup takes only one.
- Low overhead: A database uses 2048 bytes, plus 24 bytes per record, plus the space for keys and data.
- No random limits: cdb can handle any database up to 4 gigabytes. There are no other restrictions; records don’t even have to fit into memory. Databases are stored in a machine-independent format.
- Fast atomic database replacement: cdbmake can rewrite an entire database two orders of magnitude faster than other hashing packages.
- Fast database dumps: cdbdump prints the contents of a database in cdbmake-compatible format.
cdb is designed to be used in mission-critical applications like e-mail. Database replacement is safe against system crashes. Readers don’t have to pause during a rewrite.
wget http://cr.yp.to/cdb/cdb-0.75.tar.gz
If you are using CentOS 6 you need to change the source code a bit. Find the offending declaration with: grep -r "extern int errno" .
Then edit ./error.h and add #include <errno.h> instead of extern int errno.
make && make setup check
./cdbmake-sv test.cdb test.tmp < /etc/services
./cdbtest < test.cdb
Query the database like this (the trailing echo just adds a newline):
./cdbget smtp/tcp < test.cdb && echo ' '
or
./cdbget @25/tcp < test.cdb && echo ' '
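You can also build a cdb from your own data: cdbmake reads records of the form +klen,dlen:key->data, one per line, terminated by an empty line (the file names below are just examples):
printf '+3,5:foo->hello\n+3,5:bar->world\n\n' | ./cdbmake my.cdb my.tmp
./cdbget foo < my.cdb && echo ' '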
Exim: use a smart host on cPanel
To configure a smart host, create /etc/exim.conf.local on the source server (server1 in this example) and add the following lines. Be sure to change host.name.of.smart.host.server to the hostname or IP of the smart host server.
@ROUTERSTART@
smart_route:
  driver = manualroute
  domains = !+local_domains
  transport = remote_smtp
  route_list = * host.name.of.smart.host.server
Assuming this server (server1) is a cPanel server, next run /scripts/buildeximconf and then /scripts/restartsrv_exim. If not, simply restart your Exim server using normal init scripts.
Smarthost Server Config
Assuming you’re installing the yum version of Exim on a CentOS/RHEL server, you’ll need to make two configuration changes. The first is to allow the IP of the mailserver to relay through the smarthost. Open the configuration at /etc/exim/exim.conf, find the line referenced below and edit it replacing x.x.x.x with your mailserver IP.
hostlist relay_from_hosts = 127.0.0.1 : x.x.x.x
Second, you’ll need to tell Exim not to listen only on the localhost address for incoming mail, which is the default. Again find the configuration line below and add a hash (#) in front of it to comment it out.
local_interfaces = <; 127.0.0.1 ; ::1
Save the modified config file and restart Exim on this server.
That's it; watch the logs for a bit to make sure it's working! The easiest way is to just tail -f /var/log/exim_mainlog on both servers, then send a message from server1 to a remote host and watch the mail travel out through server2!
omg! OOM
Normally, a user-space program reserves (virtual) memory by calling malloc(). If the return value is NULL, the program knows that no more memory is available, and can do something appropriate. Most programs will print an error message and exit, some first need to clean up lockfiles or so, and some smarter programs can do garbage collection, or adapt the computation to the amount of available memory. This is life under Unix, and all is well.
Linux on the other hand is seriously broken. It will by default answer “yes” to most requests for memory, in the hope that programs ask for more than they actually need. If the hope is fulfilled Linux can run more programs in the same memory, or can run a program that requires more virtual memory than is available. And if not then very bad things happen.
What happens is that the OOM killer (OOM = out-of-memory) is invoked, and it will select some process and kill it. One holds long discussions about the choice of the victim. Maybe not a root process, maybe not a process doing raw I/O, maybe not a process that has already spent weeks doing some computation. And thus it can happen that one’s emacs is killed when someone else starts more stuff than the kernel can handle. Ach. Very, very primitive.
Of course, the very existence of an OOM killer is a bug.
A typical case: I do umount -a in a situation where 30000 filesystems are mounted. Now umount runs out of memory and the kernel log reports:
Sep 19 00:33:10 mette kernel: Out of Memory: Killed process 8631 (xterm).
Sep 19 00:33:34 mette kernel: Out of Memory: Killed process 9154 (xterm).
Sep 19 00:34:05 mette kernel: Out of Memory: Killed process 6840 (xterm).
Sep 19 00:34:42 mette kernel: Out of Memory: Killed process 9066 (xterm).
Sep 19 00:35:15 mette kernel: Out of Memory: Killed process 9269 (xterm).
Sep 19 00:35:43 mette kernel: Out of Memory: Killed process 9351 (xterm).
Sep 19 00:36:05 mette kernel: Out of Memory: Killed process 6752 (xterm).
Randomly, xterm windows are killed, until the xterm window that was X's console is killed. Then X exits and all user processes die, including the umount process that caused all this.
OK. This is very bad. People lose long-running processes, lose weeks of computation, just because the kernel is an optimist.
Demo program 1: allocate memory without using it.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 0;

    while (1) {
        if (malloc(1 << 20) == NULL) {
            printf("malloc failure after %d MiB\n", n);
            return 0;
        }
        printf("got %d MiB\n", ++n);
    }
}
Demo program 2: allocate memory and actually touch it all.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
    int n = 0;
    char *p;

    while (1) {
        if ((p = malloc(1 << 20)) == NULL) {
            printf("malloc failure after %d MiB\n", n);
            return 0;
        }
        memset(p, 0, 1 << 20);
        printf("got %d MiB\n", ++n);
    }
}
Demo program 3: first allocate, and use later.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define N 10000

int main(void)
{
    int i, n = 0;
    char *pp[N];

    for (n = 0; n < N; n++) {
        pp[n] = malloc(1 << 20);
        if (pp[n] == NULL)
            break;
    }
    printf("malloc failure after %d MiB\n", n);

    for (i = 0; i < n; i++) {
        memset(pp[i], 0, 1 << 20);
        printf("%d\n", i + 1);
    }
    return 0;
}
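To try them, save each listing to a file and compile it (the file names are just an assumption):
gcc -o demo1 demo1.c && ./demo1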
Typically, the first demo program will get a very large amount of memory before malloc() returns NULL. The second demo program will get a much smaller amount of memory, now that earlier obtained memory is actually used. The third program will get the same large amount as the first program, and then is killed when it wants to use its memory. (On a well-functioning system, like Solaris, the three demo programs obtain the same amount of memory and do not crash but see malloc() return NULL.)
For example:
- On an 8 MiB machine without swap running 1.2.11:
  demo1: 274 MiB; demo2: 4 MiB; demo3: 270 MiB, OOM after 1 MiB: Killed.
- Idem, with 32 MiB swap:
  demo1: 1528 MiB; demo2: 36 MiB; demo3: 1528 MiB, OOM after 23 MiB: Killed.
- On a 32 MiB machine without swap running 2.0.34:
  demo1: 1919 MiB; demo2: 11 MiB; demo3: 1919 MiB, OOM after 4 MiB: Bus error.
- Idem, with 62 MiB swap:
  demo1: 1919 MiB; demo2: 81 MiB; demo3: 1919 MiB, OOM after 74 MiB: the machine hangs. After several seconds: Out of memory for bash. Out of memory for crond. Bus error.
- On a 256 MiB machine without swap running 2.6.8.1:
  demo1: 2933 MiB; demo2: Killed after 98 MiB. Also: Out of Memory: Killed process 17384 (java_vm).
  demo3: 2933 MiB, OOM after 135 MiB: Killed.
- Idem, with 539 MiB swap:
  demo1: 2933 MiB; demo2: Killed after 635 MiB.
  demo3: OOM after 624 MiB: Killed.
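If you would rather have malloc() return NULL than meet the OOM killer, Linux lets you turn on strict overcommit accounting. A minimal sketch using the standard sysctls (the ratio value is only an example):
sysctl vm.overcommit_memory=2   # don't overcommit: commit limit = swap + overcommit_ratio% of RAM
sysctl vm.overcommit_ratio=80
# or equivalently:
echo 2 > /proc/sys/vm/overcommit_memory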
Enabling greylisting with DirectAdmin using postgrey
First download the postgrey package with yumdownloader postgrey, then extract the RPM and take the files you need:
rpm2cpio postgrey-1.34-4.fc18.noarch.rpm | cpio -idv
Then copy the postgrey executables from ./usr/sbin:
./usr/sbin/postgrey
./usr/sbin/postgreyreport
to /usr/local/sbin
./etc/postfix/postgrey_whitelist_clients.local
./etc/postfix/postgrey_whitelist_recipients
./etc/postfix/postgrey_whitelist_clients
to the /etc directory.
Then create a working directory for the postgrey database:
mkdir /var/spool/exim/postgrey && chown mailnull.mail /var/spool/exim/postgrey
Start postgrey like this:
/usr/local/sbin/postgrey -d --unix=/var/spool/exim/postgrey/socket --exim --syslog-facility=local6 --user=mailnull --group=mail --dbdir=/var/spool/exim/postgrey --delay=60 --max-age=35 --retry-window=12h --greylist-text="Greylisted. Please, try again later." --whitelist-clients=/etc/postgrey_whitelist_clients --whitelist-recipients=/etc/postgrey_whitelist_recipients --whitelist-clients=/etc/postgrey_whitelist_clients.local --auto-whitelist-clients=5
If it fails to start, you may be missing some dependencies, such as:
yum install perl-BerkeleyDB perl-Net-DNS perl-Net-Server perl-Digest-HMAC perl-IO-Multiplex perl-Digest-SHA1
Then create a new ACL rule in your exim.conf:
begin acl
# ACL that is used after the RCPT command
check_recipient:
# postgrey [TOP]
defer
log_message = greylisted host $sender_host_address
!senders = : postmaster@*
# domains = +local_domains : +relay_to_domains
!hosts = /etc/virtual/domains
!authenticated = *
verify = recipient/callout=20s,use_sender,defer_ok
set acl_m3 = request=smtpd_access_policy\n\
protocol_state=RCPT\n\
protocol_name=${uc:$received_protocol}\n\
instance=${acl_m2}\n\
helo_name=${sender_helo_name}\n\
client_address=${substr_-3:${mask:$sender_host_address/27}}\n\
client_name=${sender_host_name}\n\
sender=${sender_address}\n\
recipient=$local_part@$domain\n\n
set acl_m3 = ${sg{\
${readsocket{/var/spool/exim/postgrey/socket}{$acl_m3}\
{5s}{}{action=DUNNO}}\
}{action=}{}}
message = ${sg{$acl_m3}{^\\w+\\s*}{}}
condition = ${if eq{${uc:${substr{0}{5}{$acl_m3}}}}{DEFER}{true}{false}}
# add "greylisted by ..seconds" header to mail which has successfully
# passed the greylisting.
warn
!senders = : postmaster@*
# domains = +local_domains : +relay_to_domains
!hosts = /etc/virtual/domains
!authenticated = *
message = ${sg{$acl_m3}{^\\w+\\s*}{}}
condition = ${if eq{${uc:${substr_0_7:$acl_m3}}}{PREPEND}{true}{false}}
# postgrey [END]
# to block certain wellknown exploits, Deny for local domains if
# local parts begin with a dot or contain @ % ! / |
deny domains = +local_domains
local_parts = ^[.] : ^.*[@%!/|]
After this, restart your Exim server and check that it is actually greylisting.
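To verify from the outside, a quick probe with swaks (not part of this setup; the hostname and addresses are placeholders) should be deferred on the first attempt:
swaks --server mail.example.com --from [email protected] --to [email protected]
Look for a 4xx response containing your greylist text, and a matching entry in the local6 syslog facility.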
bash: scp: command not found
It usually means that the source side is missing the OpenSSH client applications package:
openssh-clients
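On CentOS/RHEL the fix is a single command:
yum install openssh-clients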
One more free "just ping" website
You can try http://itools.com/tool/just-ping or http://ping.ms/
wordpress: "this type of file is not permitted for security reasons"
You can fix this by editing wp-includes/functions.php: find the PHP function wp_get_mime_types() and add additional array entries, for example:
'sh' => 'application/sh',
or entries for other file extensions.
OpenVZ create VPS script
This bash script is useful for creating a new CentOS (or other) VPS in a few seconds. You can download it as cr_vm.
Source below:
#!/bin/bash
if [ -z "$2" ]; then
echo "usage: $0 ctid ipaddr"
echo "example: $0 521 192.168.122.152"
exit 1
fi
if [ -f /vz/template/cache/centos-6-x86_64-20130522.tar.xz ]; then
echo "OK"
else
echo "================================================================"
echo "Download a Centos (6.0) template"
echo "================================================================"
wget http://mirror.duomenucentras.lt/openvz/contrib/template/precreated/centos-6-x86_64-20130522.tar.xz -O /vz/template/cache/centos-6-x86_64-20130522.tar.xz
fi
echo "================================================================"
echo "Create a new container named $1"
echo "================================================================"
vzctl create $1 --ostemplate centos-6-x86_64-20130522
echo "================================================================"
echo "Set the hostname"
echo "================================================================"
vzctl set $1 --hostname $1 --save
echo "================================================================"
echo "Set the IP address"
echo "================================================================"
vzctl set $1 --ipadd $2 --save
echo "================================================================"
echo "Set OpenDNS servers 208.67.222.222 and 208.67.220.220"
echo "================================================================"
vzctl set $1 --nameserver 208.67.222.222 --nameserver 208.67.220.220 --save
echo "================================================================"
echo "Set ROOT user password"
echo "================================================================"
vzctl set $1 --userpasswd root:plainpass
echo "================================================================"
echo "Stop and start the container named $1 and wait 10 secs"
echo "================================================================"
vzctl stop $1 && vzctl start $1 && sleep 10
echo "================================================================"
echo "Ping test to google.com"
echo "================================================================"
vzctl exec $1 ping -c 3 google.com
echo "================================================================"
echo "Restarting the node $1"
echo "================================================================"
vzctl restart $1
echo "================================================================"
echo "Test command 'ps aux' executed in the node $1"
echo "================================================================"
vzctl exec $1 ps aux
You can edit this script for your needs.
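For example, to create container 521 with the IP from the usage message:
bash cr_vm 521 192.168.122.152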
KVM enable console
If you did a default CentOS installation, you may be missing console access to your virsh-managed virtual machines.
If you have started the VM from virsh like this:
virsh start --console VM_NAME
and you see only:
Connected to domain VM_NAME
Escape character is ^]
you should change your VM default configuration like this:
vi /etc/init/ttyS0.conf
# ttyS0 - agetty
#
# This script starts an agetty on ttyS0
stop on runlevel [S016]
start on runlevel [23]
respawn
exec agetty -h -L -w /dev/ttyS0 115200 vt102
and finish by starting it: initctl start ttyS0
You can also change your grub.conf file a bit:
grubby --update-kernel=ALL --args='console=ttyS0,115200n8 console=tty0'
If you add these kernel arguments you will also see kernel messages while the system boots, but it's not necessary.
To be able to log in on the console as root, you should also add ttyS0 to /etc/securetty:
echo "ttyS0" >> /etc/securetty
IPADDR_START
cd /etc/sysconfig/network-scripts
ls ifcfg-eth0-range*
If you already have a range file, you will need to create a new one for the new range of IPs you are adding, e.g. nano ifcfg-eth0-range1. If you have one named range1, name the next range2 and so on.
ifcfg-eth0-range1
Place the following text in the file:
IPADDR_START=192.168.0.10
IPADDR_END=192.168.0.110
CLONENUM_START=0
Note: CLONENUM_START defines where the alias numbering will start. If this is the second range file, you will need to set CLONENUM_START to a value higher than the number of IP addresses already assigned. To check what you currently have used, run 'ifconfig -a | grep eth0'. This will list devices such as eth0:0, eth0:1, eth0:2, and so on. If you are currently using up to eth0:16, you will need to set CLONENUM_START to 17 to assign the IPs correctly.
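For example, the file above covers 101 addresses (192.168.0.10 through 192.168.0.110), which become eth0:0 through eth0:100, so a hypothetical second file ifcfg-eth0-range2 (all values here are only examples) would start cloning at 101:
IPADDR_START=192.168.1.10
IPADDR_END=192.168.1.50
CLONENUM_START=101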
compare php versions from old server and new one
ssh -l root IP_address_old_server php -i > php_versions.txt
ssh -l root IP_address_new_one php -i >> php_versions.txt
and now you can analyze outputs:
cat php_versions.txt | sort | uniq -u
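If you would rather see which server each difference comes from, bash process substitution works too (same hosts as above):
diff <(ssh -l root IP_address_old_server php -i) <(ssh -l root IP_address_new_one php -i)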
git cheat sheet
Setup
-----

git clone <repo>
  clone the repository specified by <repo>; this is similar to "checkout" in some other version control systems such as Subversion and CVS

Add colors to your ~/.gitconfig file:

  [color]
    ui = auto
  [color "branch"]
    current = yellow reverse
    local = yellow
    remote = green
  [color "diff"]
    meta = yellow bold
    frag = magenta bold
    old = red bold
    new = green bold
  [color "status"]
    added = yellow
    changed = green
    untracked = cyan

Highlight whitespace in diffs:

  [color]
    ui = true
  [color "diff"]
    whitespace = red reverse
  [core]
    whitespace = fix,-indent-with-non-tab,trailing-space,cr-at-eol

Add aliases to your ~/.gitconfig file:

  [alias]
    st = status
    ci = commit
    br = branch
    co = checkout
    df = diff
    dc = diff --cached
    lg = log -p
    lol = log --graph --decorate --pretty=oneline --abbrev-commit
    lola = log --graph --decorate --pretty=oneline --abbrev-commit --all
    ls = ls-files
    # Show files ignored by git:
    ign = ls-files -o -i --exclude-standard

Configuration
-------------

git config -e [--global]
  edit the .git/config [or ~/.gitconfig] file in your $EDITOR

git config --global user.name 'John Doe'
git config --global user.email [email protected]
  sets your name and email for commit messages

git config branch.autosetupmerge true
  tells git-branch and git-checkout to setup new branches so that git-pull(1) will appropriately merge from that remote branch. Recommended. Without this, you will have to add --track to your branch command or manually merge remote tracking branches with "fetch" and then "merge".

git config core.autocrlf true
  This setting tells git to convert the newlines to the system's standard when checking out files, and to LF newlines when committing.

git config --list
  To view all options

git config apply.whitespace nowarn
  To ignore whitespace

You can add "--global" after "git config" to any of these commands to make it apply to all git repos (writes to ~/.gitconfig).

Info
----

git reflog
  Use this to recover from *major* mess ups! It's basically a log of the last few actions and you might have luck and find old commits that have been lost by doing a complex merge.

git diff
  show a diff of the changes made since your last commit
  to diff one file: "git diff -- <filename>"
  to show a diff between staging area and HEAD: "git diff --cached"

git status
  show files added to the staging area, files with changes, and untracked files

git log
  show recent commits, most recent on top. Useful options:
  --color                 with color
  --graph                 with an ASCII-art commit graph on the left
  --decorate              with branch and tag names on appropriate commits
  --stat                  with stats (files changed, insertions, and deletions)
  -p                      with full diffs
  --author=foo            only by a certain author
  --after="MMM DD YYYY"   ex. ("Jun 20 2008") only commits after a certain date
  --before="MMM DD YYYY"  only commits that occur before a certain date
  --merge                 only the commits involved in the current merge conflicts

git log <ref>..<ref>
  show commits between the specified range. Useful for seeing changes from remotes:
  git log HEAD..origin/master   # after git remote update

git show <rev>
  show the changeset (diff) of a commit specified by <rev>, which can be any SHA1 commit ID, branch name, or tag (shows the last commit (HEAD) by default)
  also to show the contents of a file at a specific revision, use
    git show <rev>:<filename>
  this is similar to cat-file but much simpler syntax.

git show --name-only <rev>
  show only the names of the files that changed, no diff information.

git blame <file>
  show who authored each line in <file>

git blame <file> <rev>
  show who authored each line in <file> as of <rev> (allows blame to go back in time)

git gui blame
  really nice GUI interface to git blame

git whatchanged <file>
  show only the commits which affected <file> listing the most recent first
  E.g. view all changes made to a file on a branch:
    git whatchanged <branch> <file> | grep commit | \
      colrm 1 7 | xargs -I % git show % <file>
  this could be combined with git remote show <remote> to find all changes on all branches to a particular file.

git diff <commit> head path/to/fubar
  show the diff between a file on the current branch and potentially another branch

git diff --cached [<file>]
  shows diff for staged (git-add'ed) files (which includes uncommitted git cherry-pick'ed files)

git ls-files
  list all files in the index and under version control.

git ls-remote <remote> [HEAD]
  show the current version on the remote repo. This can be used to check whether a local is required by comparing the local head revision.

Adding / Deleting
-----------------

git add <file1> <file2> ...
  add <file1>, <file2>, etc... to the project

git add <dir>
  add all files under directory <dir> to the project, including subdirectories

git add .
  add all files under the current directory to the project
  *WARNING*: including untracked files.

git rm <file1> <file2> ...
  remove <file1>, <file2>, etc... from the project

git rm $(git ls-files --deleted)
  remove all deleted files from the project

git rm --cached <file1> <file2> ...
  commits absence of <file1>, <file2>, etc... from the project

Ignoring
--------

Option 1: Edit $GIT_DIR/.git/info/exclude. See Environment Variables below for explanation on $GIT_DIR.

Option 2: Add a file .gitignore to the root of your project. This file will be checked in.

Either way you need to add patterns to exclude to these files.

Staging
-------

git add <file1> <file2> ...
git stage <file1> <file2> ...
  add changes in <file1>, <file2> ... to the staging area (to be included in the next commit)

git add -p
git stage --patch
  interactively walk through the current changes (hunks) in the working tree, and decide which changes to add to the staging area.

git add -i
git stage --interactive
  interactively add files/changes to the staging area. For a simpler mode (no menu), try "git add --patch" (above)

Unstaging
---------

git reset HEAD <file1> <file2> ...
  remove the specified files from the next commit

Committing
----------

git commit <file1> <file2> ... [-m <msg>]
  commit <file1>, <file2>, etc..., optionally using commit message <msg>, otherwise opening your editor to let you type a commit message

git commit -a
  commit all files changed since your last commit (does not include new (untracked) files)

git commit -v
  commit verbosely, i.e. includes the diff of the contents being committed in the commit message screen

git commit --amend
  edit the commit message of the most recent commit

git commit --amend <file1> <file2> ...
  redo previous commit, including changes made to <file1>, <file2>, etc...

Branching
---------

git branch
  list all local branches

git branch -r
  list all remote branches

git branch -a
  list all local and remote branches

git branch <branch>
  create a new branch named <branch>, referencing the same point in history as the current branch

git branch <branch> <start-point>
  create a new branch named <branch>, referencing <start-point>, which may be specified any way you like, including using a branch name or a tag name

git push <repo> <start-point>:refs/heads/<branch>
  create a new remote branch named <branch>, referencing <start-point> on the remote. Repo is the name of the remote.
  Example: git push origin origin:refs/heads/branch-1
  Example: git push origin origin/branch-1:refs/heads/branch-2
  Example: git push origin branch-1 ## shortcut

git branch --track <branch> <remote-branch>
  create a tracking branch. Will push/pull changes to/from another repository.
  Example: git branch --track experimental origin/experimental

git branch --set-upstream <branch> <remote-branch>
  (As of Git 1.7.0) Make an existing branch track a remote branch
  Example: git branch --set-upstream foo origin/foo

git branch -d <branch>
  delete the branch <branch>; if the branch you are deleting points to a commit which is not reachable from the current branch, this command will fail with a warning.

git branch -r -d <remote-branch>
  delete a remote-tracking branch.
  Example: git branch -r -d wycats/master

git branch -D <branch>
  even if the branch points to a commit not reachable from the current branch, you may know that that commit is still reachable from some other branch or tag. In that case it is safe to use this command to force git to delete the branch.

git checkout <branch>
  make the current branch <branch>, updating the working directory to reflect the version referenced by <branch>

git checkout -b <new> <start-point>
  create a new branch <new> referencing <start-point>, and check it out.

git push <repository> :<branch>
  removes a branch from a remote repository.
  Example: git push origin :old_branch_to_be_deleted

git co <branch> <path to new file>
  Checkout a file from another branch and add it to this branch. File will still need to be added to the git branch, but it's present.
  Eg. git co remote_at_origin__tick702_antifraud_blocking ..../...nt_elements_for_iframe_blocked_page.rb

git show <branch> -- <path to file that does not exist>
  Eg. git show remote_tick702 -- path/to/fubar.txt
  show the contents of a file that was created on another branch and that does not exist on the current branch.

git show <rev>:<repo path to file>
  Show the contents of a file at the specific revision. Note: path has to be absolute within the repo.

Merging
-------

git merge <branch>
  merge branch <branch> into the current branch; this command is idempotent and can be run as many times as needed to keep the current branch up-to-date with changes in <branch>

git merge <branch> --no-commit
  merge branch <branch> into the current branch, but do not autocommit the result; allows you to make further tweaks

git merge <branch> -s ours
  merge branch <branch> into the current branch, but drops any changes in <branch>, using the current tree as the new tree

Cherry-Picking
--------------

git cherry-pick [--edit] [-n] [-m parent-number] [-s] [-x] <commit>
  selectively merge a single commit from another local branch
  Example: git cherry-pick 7300a6130d9447e18a931e898b64eefedea19544

git hash-object <file-path>
  get the blob of some file whether it is in a repository or not

Find the commit in the repository that contains the file blob:

  obj_blob="$1"
  git log --pretty=format:'%T %h %s' \
  | while read tree commit subject ; do
      if git ls-tree -r $tree | grep -q "$obj_blob" ; then
        echo $commit "$subject"
      fi
    done

Squashing
---------

WARNING: "git rebase" changes history. Be careful. Google it.

git rebase --interactive HEAD~10
  (then change all but the first "pick" to "squash")
  squash the last 10 commits into one big commit

Conflicts
---------

git mergetool
  work through conflicted files by opening them in your mergetool (opendiff, kdiff3, etc.) and choosing left/right chunks. The merged result is staged for commit.

For binary files or if mergetool won't do, resolve the conflict(s) manually and then do:

  git add <file1> [<file2> ...]

Once all conflicts are resolved and staged, commit the pending merge with:

  git commit

Sharing
-------

git fetch <remote>
  update the remote-tracking branches for <remote> (defaults to "origin"). Does not initiate a merge into the current branch (see "git pull" below).

git pull
  fetch changes from the server, and merge them into the current branch.
  Note: .git/config must have a [branch "some_name"] section for the current branch, to know which remote-tracking branch to merge into the current branch. Git 1.5.3 and above adds this automatically.

git push
  update the server with your commits across all branches that are *COMMON* between your local copy and the server. Local branches that were never pushed to the server in the first place are not shared.

git push origin <branch>
  update the server with your commits made to <branch> since your last push. This is always *required* for new branches that you wish to share. After the first explicit push, "git push" by itself is sufficient.

git push origin <branch>:refs/heads/<branch>
  E.g. git push origin twitter-experiment:refs/heads/twitter-experiment
  Which, in fact, is the same as git push origin <branch> but a little more obvious what is happening.

Reverting
---------

git revert <rev>
  reverse commit specified by <rev> and commit the result. This does *not* do the same thing as similarly named commands in other VCS's such as "svn revert" or "bzr revert", see below

git checkout <file>
  re-checkout <file>, overwriting any local changes

git checkout .
  re-checkout all files, overwriting any local changes. This is most similar to "svn revert" if you're used to Subversion commands

Fix mistakes / Undo
-------------------

git reset --hard
  abandon everything since your last commit; this command can be DANGEROUS. If merging has resulted in conflicts and you'd like to just forget about the merge, this command will do that.

git reset --hard ORIG_HEAD
or
git reset --hard origin/master
  undo your most recent *successful* merge *and* any changes that occurred after. Useful for forgetting about the merge you just did. If there are conflicts (the merge was not successful), use "git reset --hard" (above) instead.

git reset --soft HEAD^
  forgot something in your last commit? That's easy to fix. Undo your last commit, but keep the changes in the staging area for editing.

git commit --amend
  redo previous commit, including changes you've staged in the meantime. Also used to edit commit message of previous commit.

Plumbing
--------

test <sha1-A> = $(git merge-base <sha1-A> <sha1-B>)
  determine if merging sha1-B into sha1-A is achievable as a fast forward; non-zero exit status is false.

Stashing
--------

git stash
git stash save <optional-name>
  save your local modifications to a new stash (so you can for example "git svn rebase" or "git pull")

git stash apply
  restore the changes recorded in the stash on top of the current working tree state

git stash pop
  restore the changes from the most recent stash, and remove it from the stack of stashed changes

git stash list
  list all current stashes

git stash show <stash-name> -p
  show the contents of a stash - accepts all diff args

git stash drop [<stash-name>]
  delete the stash

git stash clear
  delete all current stashes

Remotes
-------

git remote add <remote> <remote_URL>
  adds a remote repository to your git config. Can be then fetched locally.
  Example:
    git remote add coreteam git://github.com/wycats/merb-plugins.git
    git fetch coreteam

git push <remote> :refs/heads/<branch>
  delete a branch in a remote repository

git push <remote> <remote>:refs/heads/<remote_branch>
  create a branch on a remote repository
  Example: git push origin origin:refs/heads/new_feature_name

git push <repository> +<remote>:<new_remote>
  replace a <remote> branch with <new_remote>
  think twice before doing this
  Example: git push origin +master:my_branch

git remote prune <remote>
  prune deleted remote-tracking branches from "git branch -r" listing

git remote add -t master -m master origin git://example.com/git.git/
  add a remote and track its master

git remote show <remote>
  show information about the remote server.

git checkout -b <local branch> <remote>/<remote branch>
  Eg.: git checkout -b myfeature origin/myfeature
       git checkout -b myfeature remotes/<remote>/<branch>
  Track a remote branch as a local branch. It seems that sometimes an extra 'remotes/' is required; to see the exact branch name, 'git branch -a'.

git pull <remote> <branch>
git push
  For branches that are remotely tracked (via git push) but that complain about non-fast forward commits when doing a git push. The pull synchronizes local and remote, and if all goes well, the result is pushable.

git fetch <remote>
  Retrieves all branches from the remote repository. After this 'git branch --track ...' can be used to track a branch from the new remote.

Submodules
----------

git submodule add <remote_repository> <path/to/submodule>
  add the given repository at the given path. The addition will be part of the next commit.

git submodule update [--init]
  Update the registered submodules (clone missing submodules, and checkout the commit specified by the super-repo). --init is needed the first time.

git submodule foreach <command>
  Executes the given command within each checked out submodule.

Removing submodules

  1. Delete the relevant line from the .gitmodules file.
  2. Delete the relevant section from .git/config.
  3. Run git rm --cached path_to_submodule (no trailing slash).
  4. Commit and delete the now untracked submodule files.

Updating submodules

  To update a submodule to a new commit:
  1. update submodule:
       cd <path to submodule>
       git pull
  2. commit the new version of submodule:
       cd <path to toplevel>
       git commit -m "update submodule version"
  3. check that the submodule has the correct version:
       git submodule status
  If the update in the submodule is not committed in the main repository, it is lost and doing git submodule update will revert to the previous version.

Patches
-------

git format-patch HEAD^
  Generate the last commit as a patch that can be applied on another clone (or branch) using 'git am'. Format patch can also generate a patch for all commits using 'git format-patch HEAD^ HEAD'. All patch files will be enumerated with a prefix, e.g. 0001 is the first patch.

git format-patch <Revision>^..<Revision>
  Generate a patch for a single commit. E.g.
    git format-patch d8efce43099^..d8efce43099
  Revision does not need to be fully specified.

git am <patch file>
  Applies the patch file generated by format-patch.

git diff --no-prefix > patchfile
  Generates a patch file that can be applied using patch:
    patch -p0 < patchfile
  Useful for sharing changes without generating a git commit.

Tags
----

git tag -l
  Will list all tags defined in the repository.

git co <tag_name>
  Will checkout the code for a particular tag. After this you'll probably want to do: 'git co -b <some branch name>' to define a branch. Any changes you now make can be committed to that branch and later merged.

Archive
-------

git archive master | tar -x -C /somewhere/else
  Will export expanded tree as tar archive at given path

git archive master | bzip2 > source-tree.tar.bz2
  Will export archive as bz2

git archive --format zip --output /full/path master
  Will export as zip

Git Instaweb
------------

git instaweb --httpd=webrick [--start | --stop | --restart]

Environment Variables
---------------------

GIT_AUTHOR_NAME, GIT_COMMITTER_NAME
  Your full name to be recorded in any newly created commits. Overrides user.name in .git/config

GIT_AUTHOR_EMAIL, GIT_COMMITTER_EMAIL
  Your email address to be recorded in any newly created commits. Overrides user.email in .git/config

GIT_DIR
  Location of the repository to use (for out of working directory repositories)

GIT_WORKING_TREE
  Location of the Working Directory - use with GIT_DIR to specify the working directory root or to work without being in the working directory at all.

Changing history
----------------

Change author for all commits with given name:

  git filter-branch --commit-filter '
    if [ "$GIT_COMMITTER_NAME" = "<Old Name>" ]; then
      GIT_COMMITTER_NAME="<New Name>";
      GIT_AUTHOR_NAME="<New Name>";
      GIT_COMMITTER_EMAIL="<New Email>";
      GIT_AUTHOR_EMAIL="<New Email>";
      git commit-tree "$@";
    else
      git commit-tree "$@";
    fi' HEAD
Bcache
Bcache is a Linux kernel block layer cache. It allows one or more fast disk drives such as flash-based solid state drives (SSDs) to act as a cache for one or more slower hard disk drives.
Hard drives are cheap and big, SSDs are fast but small and expensive. Wouldn’t it be nice if you could transparently get the advantages of both? With Bcache, you can have your cake and eat it too.
Bcache patches for the Linux kernel allow one to use SSDs to cache other block devices. It’s analogous to L2Arc for ZFS, but Bcache also does writeback caching (besides just write through caching), and it’s filesystem agnostic. It’s designed to be switched on with a minimum of effort, and to work well without configuration on any setup. By default it won’t cache sequential IO, just the random reads and writes that SSDs excel at. It’s meant to be suitable for desktops, servers, high end storage arrays, and perhaps even embedded.
The design goal is to be just as fast as the SSD and cached device (depending on cache hit vs. miss, and writethrough vs. writeback writes) to within the margin of error. It’s not quite there yet, mostly for sequential reads. But testing has shown that it is emphatically possible, and even in some cases to do better – primarily random writes.
It’s also designed to be safe. Reliability is critical for anything that does writeback caching; if it breaks, you will lose data. Bcache is meant to be a superior alternative to battery backed up raid controllers, thus it must be reliable even if the power cord is yanked out. It won’t return a write as completed until everything necessary to locate it is on stable storage, nor will writes ever be seen as partially completed (or worse, missing) in the event of power failure. A large amount of work has gone into making this work efficiently.
Bcache is designed around the performance characteristics of SSDs. It’s designed to minimize write inflation to the greatest extent possible, and never itself does random writes. It turns random writes into sequential writes – first when it writes them to the SSD, and then with writeback caching it can use your SSD to buffer gigabytes of writes and write them all out in order to your hard drive or raid array. If you’ve got a RAID6, you’re probably aware of the painful random write penalty, and the expensive controllers with battery backup people buy to mitigate them. Now, you can use Linux’s excellent software RAID and still get fast random writes, even on cheap hardware.
Features
- A single cache device can be used to cache an arbitrary number of backing devices, and backing devices can be attached and detached at runtime, while mounted and in use (they run in passthrough mode when they don’t have a cache).
- Recovers from unclean shutdown – writes are not completed until the cache is consistent with respect to the backing device (Internally, bcache doesn’t distinguish between clean and unclean shutdown).
- Barriers/cache flushes are handled correctly.
- Writethrough, writeback and writearound.
- Detects and bypasses sequential IO (with a configurable threshold, and can be disabled).
- Throttles traffic to the SSD if it becomes congested, detected by latency to the SSD exceeding a configurable threshold (useful if you’ve got one SSD for many disks).
- Readahead on cache miss (disabled by default).
- Highly efficient writeback implementation; dirty data is always written out in sorted order, and if writeback_percent is enabled background writeback is smoothly throttled with a PD controller to keep around that percentage of the cache dirty.
- Very high performance b+ tree – bcache is capable of around 1M iops on random reads, if your hardware is fast enough.
- Stable – in production use now.
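A minimal setup sketch with bcache-tools, assuming /dev/sdb is the slow backing disk and /dev/sdc is the SSD (device names and the UUID are placeholders, not a definitive procedure):
make-bcache -B /dev/sdb          # format the slow backing device
make-bcache -C /dev/sdc          # format the SSD as a cache set
# if the devices don't auto-register:
echo /dev/sdb > /sys/fs/bcache/register
echo /dev/sdc > /sys/fs/bcache/register
# attach the backing device to the cache set by the cache set's UUID:
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
mkfs.ext4 /dev/bcache0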