Thursday, 21 February 2013

Exchange 2010 CAS / DAG project II.

Some additional info about the project: this. But no, you definitely want to read this first. Having read it, you won't want to read anything else.

New mailbox move request.

Here you can see the old legacy mailboxes (from the old Exchange 2003) and the new, already migrated ones (on a 2010 MBX server). New mailboxes stay green in this window until you manually delete the successful move request, one option below.

The CAS1 and CAS2 servers are in one Load Balancing cluster (add Network Load Balancing as a "feature"). Easy peasy.

Maybe not.

As you can see, we have the MBX1 and MBX2 databases; MBX1 is mounted on SMBX1. With Move Active Mailbox Database you can move the database to the failover node, called SMBX2 in this example.

 A working DAG.

Wednesday, 20 February 2013

Two days left at work

By the end of this week I'm going to be unemployed. Next month I'll build a home lab with several virtual servers and do some experiments. I'll have the time.

Monday, 18 February 2013

Windows 2003 / Exchange migration to Windows 2008 R2 / Exchange 2010

That's really a difficult scenario.
Let's imagine that we have a working system of two physical servers, one running Windows 2003 and one running Exchange 2003 Server.
We are going to join two new physical servers to the domain, each running Windows 2008 R2 with the Hyper-V role installed. On these two new hosts we are going to install six virtual servers. Once the whole process is done, we remove the old Windows 2003 hosts.

Here come the Windows 2008 R2 installation steps and the process by which the two new domain controllers take over the FSMO roles from the old Windows 2003 DC.

So, we are going to install six new Windows 2008 R2 servers: two for domain controllers and the related roles (called DC1 and DC2) and four to serve Exchange 2010.
Two servers form the CAS (Client Access Server) failover array and two are Mailbox servers. As I mentioned (did I?), no storage is attached to this hardware config.
The first step before installing Exchange 2010 is to raise the domain and forest functional level to at least 2003. Double-check that it's okay.

In case shit happens, you should consider this and this. But before a problem occurs, you may want to read this. Be careful with dcpromo /forceremoval on Windows 2003, because you may not be able to log in with your domain user after the removal. DO NOT run /forceremoval on a working Exchange server.

[...] Here comes the howto on taking over the Exchange 2003 databases and roles with the new servers [...]

to be continued....

Friday, 15 February 2013

Policy routing script doesn't work on Ubuntu

We've recently switched a gateway server from Debian to Ubuntu, and to my greatest surprise my neat little policy routing script doesn't work on it.
The issue is almost the same as detailed here.
I've found an interesting comment:
Remember that Ubuntu enables reverse path filtering by default. Reverse path filtering works as follows: when the kernel receives a packet (whether forwarded or not) from an interface A, it inverts the source address and the destination address, and checks whether the resulting packet would be routed through interface A. If it wouldn't, the packet is dropped as an address spoofing attempt. For packets received from eth0, this is not a problem. For packets received from eth1, this is also not a problem, because when reversing the source IP address and the destination IP address, the kernel will hit the default route in table main. For packets received from eth2, which you do not mark, this is a problem, because the kernel will hit the default route in table main and consider that these packets should have been received from eth1.

Wow, that would be the cause of the problem. So I added to my existing...

# peer (gateway) address of ppp0, parsed from ifconfig output
PPP0GW=`ifconfig | grep -A1 ppp0 | tail -1 | cut -d : -f3 | cut -d ' ' -f1`
ip rule add fwmark 2 table adsl
#ip route flush table adsl
ip route add table adsl $PPP0GW dev ppp0 src 188.6.X.X
ip route add table adsl default via $PPP0GW
ip rule add from 188.6.X.X table adsl

....script the following:

echo 1 > /proc/sys/net/ipv4/ip_forward
# turn reverse path filtering off on every interface
for f in /proc/sys/net/ipv4/conf/*/rp_filter ; do echo 0 > $f ; done
echo 0 > /proc/sys/net/ipv4/route/flush
ip route flush cache # just to make sure
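As an aside, the PPP0GW line at the top of the script is fairly opaque. Here is what it does, run against canned ifconfig output (the addresses are made up for the sketch): take the line after "ppp0", split on ":", grab the third field, then the first word of it, which is the P-t-P peer address.

```shell
# Sketch: extracting the ppp0 peer address the same way the script does.
out="ppp0      Link encap:Point-to-Point Protocol
          inet addr:10.0.0.2  P-t-P:10.0.0.1  Mask:255.255.255.255"
gw=$(echo "$out" | grep -A1 ppp0 | tail -1 | cut -d : -f3 | cut -d ' ' -f1)
echo "$gw"   # prints the peer address, 10.0.0.1
```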

Still no good. No solution yet; I just wanted to record it (and postpone it) until next week.

UPDATE: it's getting weirder and weirder. As you can see in my command
IPTABLES -t mangle -A PREROUTING -i eth1 -p tcp ! -d [SKIP-THIS-IP] -m multiport --dports 80,443,3389,[etc] -j MARK --set-mark 2
we have several ports to redirect. Now all of the ports are working except a few of them, usually 80 and 443.

UPDATE2: It turned out that we had a tricky little script here that always echoed a "1" into /proc/sys/net/ipv4/conf/all/rp_filter. Every other interface's reverse path filter stayed 0, but this "all = 1" was enough to break the traffic on ports 80/443. God knows why only those were affected.
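This at least explains why the "all = 1" mattered: the kernel combines conf/all/rp_filter with the per-interface value by taking the maximum, so a per-interface 0 cannot override all = 1. A tiny sketch of that rule (the helper function name is mine, not a real tool):

```shell
# rp_filter semantics: the effective setting for an interface is
# max(conf/all/rp_filter, conf/<iface>/rp_filter).
effective_rp_filter() {   # $1 = "all" value, $2 = per-interface value
    if [ "$1" -gt "$2" ]; then echo "$1"; else echo "$2"; fi
}

# all=1, eth1=0 -> effective 1: strict filtering stays on despite the 0
effective_rp_filter 1 0
```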

Thursday, 14 February 2013

Keepalived and squid as a load balancer

Okay, we have four servers to load-balance and fail over the web traffic. The first two servers hold a shared IP address with keepalived:

global_defs {
    notification_email {
    }
    notification_email_from roto@masterkey.com
    smtp_connect_timeout 2
    lvs_id LVS_01
}

vrrp_instance VI_1 {
    interface eth1
    state MASTER
    virtual_router_id 51
    priority 99
    authentication {
        auth_type PASS
        auth_pass SECRET
    }
    virtual_ipaddress {
    }
#   track_script {
#       chk_haproxy
#   }
    notify_master /etc/keepalived/master
    notify_backup /etc/keepalived/backup
}


On the slave node everything is similar except the priority. You can put anything into the master and backup scripts, e.g. /etc/init.d/squid3 restart (just to make sure that squid comes up and listens on the shared IP. No, I don't think it makes sense.)

Squid3 runs on both frontend nodes with this config:

cache_peer [BACKEND1-IP] parent 3128 3130 proxy-only round-robin login=PASSTHRU
cache_peer [BACKEND2-IP] parent 3128 3130 proxy-only round-robin login=PASSTHRU
dead_peer_timeout 15 seconds
hierarchy_stoplist cgi-bin ? ebolaplay
cache_mem 8 MB
maximum_object_size_in_memory 1 MB
memory_replacement_policy lru
cache deny all
cache_dir null /tmp
logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid3/access.log combined
cache_store_log /var/log/squid3/store.log
cache_log  /var/log/squid3/cache.log
logfile_rotate 8

On the backend Squid servers there is nothing special. Caching is set to ON, but there is no point logging the source IP addresses of the requests, because here they all carry the frontend server's IP address anyway.

Debian and IBM SAN

Linux foto 2.6.26-2-amd64 #1 SMP Thu Sep 16 15:56:38 UTC 2010 x86_64 GNU/Linux

Must have: firmware-bnx2_0.14+lenny2_all.deb  + firmware-qlogic_0.14+lenny2_all.deb

foto:/etc# cat multipath.conf
defaults {
        polling_interval    30
        failback            immediate
        no_path_retry       5
        rr_min_io           100
        path_checker        tur
        user_friendly_names yes
}

multipaths {
    multipath {
        wwid  360050768018e03081000000000000038
        alias 1000G
    }
    multipath {
        wwid  360050768018e03081000000000000039
        alias 300G
    }
    multipath {
        wwid  360050768018e03081000000000000045
        alias 400G
    }
}

#devnode_blacklist {
#        devnode "*"
#        }

#blacklist {
#        devnode "sdq"
#        }

devices {
    device {
        vendor               "IBM"
        product              "2145"
        path_grouping_policy group_by_prio
        prio_callout         "/sbin/mpath_prio_alua /dev/%n"
    }
}

Extending LVM and growing XFS (straight from my shell history):

lvdisplay
pvdisplay
vgdisplay
lvdisplay
pvs
vgdisplay -v
# remove the old volume group
vgremove kiskepek
vgremove kiskepek
pvcreate /dev/kiskepek/kiskepek-p1
df -h
# turn the new 300G multipath device into a physical volume
pvcreate /dev/mapper/300G
pvdisplay
# add it to the existing volume group (dm-1 is the same device as /dev/mapper/300G)
vgextend nagykepek /dev/dm-1
vgdisplay
df -h
mc
vgdisplay
lvdisplay
# grow the logical volume by the new 300G
lvextend /dev/nagykepek/nagykepek-p1 --size=+300G
mc
mount
vgdisplay
df -h
# grow the mounted XFS filesystem online to fill the logical volume
xfs_growfs /www/pix
df -h

Wednesday, 6 February 2013


Quick sort by size:
du -shc * | sort -h
Change the group owner on files where the group is X:
find . -type f -group X -exec chgrp Y {} \;
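The sort -h above understands human-readable size suffixes (K, M, G), which is what makes the du pipeline order entries by real size rather than alphabetically. A quick demonstration with made-up sizes:

```shell
# sort -h compares by magnitude, not by string:
printf '12K\n3M\n999K\n1G\n' | sort -h
# prints 12K, 999K, 3M, 1G (smallest first)
```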

I can't remember how this next one was made, but I must have been in a hurry. :)

rm /root/mailbe &> /dev/null
# if the "done" flag exists, a previous run already finished - bail out
[ -f /root/okk ] && exit 0
for i in Bacs Baranya Bekes Heves; do
  cd /atvitel/pdf/npd/backup/$i
  # count tomorrow's PDFs for this county
  for j in *`date --date="tomorrow" +%d`*.pdf; do
    cou=`expr $cou + 1`
  done
  mtomb[$k]=`echo ${mtomb[$k]} $cou`
  k=`expr $k + 1`
done
for k in `seq 0 8`; do
  pdfek=`echo ${mtomb[$k]} | cut -d ' ' -f 2`
  echo ${mtomb[$k]} >> /root/mailbe
  # a county is "done" when it has at least 16 PDFs
  [ "$pdfek" -ge "16" ] && megye=`echo "$megye + 1" | bc`
done
if [ $megye = 9 ]; then
  # subject "goto home": all counties are done
  echo "Bacs Baranya Bekes Heves megvan ,OK" | mail -s "goto home" "viktornak@email.hu"
  touch /root/okk
  # "Megyek kuldese allapot jelentes" = county sending status report
  cat /root/mailbe | mail -s "Megyek kuldese allapot jelentes" "viktornak@email.hu"
fi
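The core idiom the script relies on is expr arithmetic over a glob built from tomorrow's day-of-month (date --date="tomorrow" +%d). A minimal, runnable sketch with made-up file names:

```shell
# date +%d always yields a two-digit day-of-month (01-31),
# so the glob *<dd>*.pdf matches names containing tomorrow's day.
d=`date --date="tomorrow" +%d`
cou=0
for j in "alpha_${d}_1.pdf" "alpha_${d}_2.pdf"; do
  cou=`expr $cou + 1`
done
echo "$cou files matching day $d"
```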

CentOS 6.3 on Dell R210

After managing to install CentOS 6.3 on this little beauty of a Dell server, I was terrified by the interfaces output.
So what the hell are these weird names: em1, em2, p1p1, p1p2?
Simple (and somewhat overkill) solution: rpm -e biosdevname...
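Instead of removing the package, biosdevname can also be turned off from the kernel command line, which survives package updates. A config fragment, assuming GRUB legacy as shipped with CentOS 6 (the exact kernel line will differ on your box):

```
# /boot/grub/grub.conf - append biosdevname=0 to the existing kernel line:
kernel /vmlinuz-... ro root=... biosdevname=0
```

After a reboot the NICs come back as the classic eth0, eth1, ...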