Oct 17

OK, so you broke your freebsd /var/db/pkg directory and want to recover it from a backup.

backups look like this:
root@websrv:/var/db/pkg # ls -l /var/backups/
total 19190
-rw-r--r-- 1 root wheel 1690 Jul 10 2014 aliases.bak
-rw-r--r-- 1 root wheel 475 Jul 1 19:30 group.bak
-rw------- 1 root wheel 1937 Sep 7 10:25 master.passwd.bak
-rw------- 1 root wheel 1954 Aug 23 18:34 master.passwd.bak2
-rw-r--r-- 1 root wheel 2429640 Oct 16 03:20 pkg.sql.xz
-rw-r--r-- 1 root wheel 2429640 Oct 15 03:01 pkg.sql.xz.1
-rw-r--r-- 1 root wheel 2429640 Oct 14 03:09 pkg.sql.xz.2
-rw-r--r-- 1 root wheel 2429640 Oct 13 03:01 pkg.sql.xz.3
-rw-r--r-- 1 root wheel 2429640 Oct 12 03:01 pkg.sql.xz.4
-rw-r--r-- 1 root wheel 2429640 Oct 11 04:00 pkg.sql.xz.5
-rw-r--r-- 1 root wheel 2429640 Oct 10 03:01 pkg.sql.xz.6
-rw-r--r-- 1 root wheel 2429640 Oct 9 03:01 pkg.sql.xz.7

but it’s not working as advertised…

root@websrv:/var/db/pkg # pkg backup -r /var/backups/pkg.sql.xz
Restoring database:
Restoring: 100%
pkg: sqlite error while executing backup step in file backup.c:99: not an error
pkg: sqlite error -- (null)

root@websrv:/tmp # pkg backup -r pkg.sql
Restoring database:
Restoring: 100%
pkg: sqlite error while executing backup step in file backup.c:99: not an error
pkg: sqlite error -- (null)

always results in a fresh but empty /var/db/pkg/local.sqlite.

root@websrv:/var/backups # pkg info
(no output, the database is empty)

Manual fix

root@websrv:/var/db/pkg # pkg install sqlite3
Updating FreeBSD repository catalogue…
FreeBSD repository is up-to-date.
All repositories are up-to-date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
sqlite3: 3.8.11.1_1

root@websrv:/var/backups # cp pkg.sql.xz /tmp
root@websrv:/var/backups # xz -d /tmp/pkg.sql.xz
root@websrv:/var/backups # cd /var/db/pkg
root@websrv:/var/db/pkg # mv local.sqlite local.sqlite.broken
root@websrv:/var/db/pkg # sqlite3 local.sqlite
SQLite version 3.8.11.1 2015-07-29 20:00:57
Enter ".help" for usage hints.
sqlite> .read /tmp/
Display all 1316 possibilities? (y or n)
sqlite> .read /tmp/pkg.sql

tada

root@websrv:/var/backups # pkg info
ap24-mod_mpm_itk-2.4.7_1 This MPM allows you to run each vhost under a separate uid and gid
apache24-2.4.16_1 Version 2.4.x of Apache web server
apr-1.5.2.1.5.4 Apache Portability Library
autoconf-2.69 Automatically configure source code on many Un*x platforms
autoconf-wrapper-20131203 Wrapper script for GNU autoconf
automake-1.15 GNU Standards-compliant Makefile generator
automake-wrapper-20131203 Wrapper script for GNU automake
binutils-2.25.1 GNU binary tools
bison-2.7.1,1 Parser generator from FSF, (mostly) compatible with Yacc
boost-jam-1.55.0 Build tool from the boost.org
boost-libs-1.55.0_9 Free portable C++ libraries (without Boost.Python)
ca_root_nss-3.20 Root certificate bundle from the Mozilla Project
cmake-3.3.1 Cross-platform Makefile generator
cmake-modules-3.3.1 Modules and Templates for CMake
curl-7.44.0 Non-interactive tool to get files from FTP, GOPHER, HTTP(S) servers
db5-5.3.28_2 The Oracle Berkeley DB, revision 5.3
dialog4ports-0.1.5_2 Console Interface to configure ports
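
For next time: the whole manual restore fits into a few lines. A rough sketch (same paths as above; assumes sqlite3 is installed and the newest dump is /var/backups/pkg.sql.xz):

#!/bin/sh
# sketch: rebuild /var/db/pkg/local.sqlite from the nightly dump
set -e
cp /var/backups/pkg.sql.xz /tmp/
xz -df /tmp/pkg.sql.xz                      # leaves /tmp/pkg.sql
cd /var/db/pkg
[ -f local.sqlite ] && mv local.sqlite local.sqlite.broken
sqlite3 local.sqlite ".read /tmp/pkg.sql"   # replay the SQL dump into a fresh database
pkg info | head                             # quick sanity check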

Sep 22
dovecot: master: Warning: service(imap-login): process_limit (256) reached, client connections are being dropped

So you found this error message in your server logs and noticed you cannot log in anymore via IMAP because all available ‘slots’ have been consumed. My working theory is that it is related to the iOS 7 release and its IPv6 support. With privacy extensions enabled (see RFC 4941), it looks like the iOS device grabs a new IPv6 address every time it wakes up. This is perfectly fine behaviour if it weren’t for IMAP IDLE. From my understanding, IMAP IDLE is sort of a long-lasting SSL tunnel with a very long TTL. This way your client doesn’t need to be online all the time and only wakes up if there is anything new on the server (RFC 2177).
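
For reference, the basic IDLE exchange from RFC 2177 looks like this; the client parks itself in IDLE and the server pushes updates until the client says DONE:

C: a002 IDLE
S: + idling
...time passes; new mail arrives...
S: * 4 EXISTS
C: DONE
S: a002 OK IDLE terminated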

Workaround: Break IMAP IDLE. Not really, but at least reduce the TTL of the tunnel somewhat. Don’t be too harsh, otherwise your mobile device is going to wake up too often and consume too much power. I set it to 30 minutes.

protocol imap {
 #process_limit = 512
 # process_min_avail = 5
 imap_idle_notify_interval = 30 mins
 mail_max_userip_connections = 10
}
#You can check your configuration with "doveconf -N" before reloading/restarting.

If it works, dovecot will drop the connections to unresponsive, ‘expired’ IPv6 addresses, freeing resources.

Sep 22 11:04:36 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=273/1229
Sep 22 11:04:37 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=237/1161
Sep 22 11:04:37 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=189/1073
Sep 22 11:04:37 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=222/1132
Sep 22 11:04:37 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=294/1268
Sep 22 11:04:37 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=273/1229
Sep 22 11:04:37 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=222/1211
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=246/1180
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=288/1258
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=252/1190
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=273/1229
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=267/1219
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=288/1258
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=252/1190
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=165/1104
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=210/1112
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=216/1201
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=237/1161
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=189/1073
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=237/1161
Sep 22 11:04:40 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=273/1229
Sep 22 11:04:41 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=216/1122
Sep 22 11:04:41 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=258/1200
Sep 22 11:04:41 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=201/1172
Sep 22 11:04:41 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=252/1190
Sep 22 11:04:41 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=231/1151
Sep 22 11:04:41 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=231/1151
Sep 22 11:04:41 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=252/1190
Sep 22 11:04:41 mx dovecot: imap(email@server): Disconnected: Disconnected in IDLE bytes=189/1073
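
If you want to watch the effect, doveadm can show who is holding sessions; a quick sketch, assuming Dovecot 2.x:

doveadm who                   # connections grouped per user: username, # of conns, proto, pids, ips
doveadm who -1                # one line per connection, handy for counting IPv6 addresses per user
pgrep imap-login | wc -l      # rough total of imap-login processes to compare against process_limit
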
Feb 22

Installation completed successfully.

with 8x 3TB drives, 2 SSD
But let’s start from the beginning…

Delivery & Hardware System Setup

Installation

FreeBSD Installation with ZFS Setup

  • since I am pretty much a FreeBSD noob, I have been googling a lot and came up with a few things to consider.
    • Sector Alignment – New hard-drives, including SSDs, report a 512-byte sector size to the operating system but actually use 4k sectors internally. This matters because today you have multiple layers in between (hard disk, LVM, filesystem, database tablespaces etc.): if you screw this up, every single I/O call will multiply, reducing performance dramatically and putting more wear on your SSDs. So make sure your partitioning and formatting use 4k sectors.
    • there are a few tools on FreeBSD to make ZFS use 4k sectors (see the gnop sketch after this list). Using encryption also allows you to set a fixed sector size.
    • Another thing to consider when partitioning: leave some space between the start of the disk and your first partition. I started at 2 megabytes. This helps because it is a multiple of both the 512-byte and 4k sector sizes, and if you have to replace a disk a few years later, you have some room in case the new drive has a different sector layout than your previous hard-drive.
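
A rough sketch of how this looks with gpart and the gnop trick (ada0 and ‘tank’ are example names; gnop fakes a 4k sector size so the pool is created with ashift=12):

gpart create -s gpt ada0
gpart add -t freebsd-zfs -b 4096 -a 4k ada0   # start at sector 4096 = 2MB into the disk, 4k-aligned
gnop create -S 4096 /dev/ada0p1               # present the partition with 4k sectors
zpool create tank /dev/ada0p1.nop             # pool picks up ashift=12 from the fake provider
zpool export tank
gnop destroy /dev/ada0p1.nop
zpool import tank                             # the ashift=12 setting survives the reimport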

     

        1. I used the latest production release, FreeBSD 9.1, as a DVD image, burned it onto a disc and used the following pages to guide me through the installation.
        2. my /boot/loader.conf (see the geli sketch after this list for how the keyfile providers referenced below get created)
          zfs_load="YES"
          aesni_load="YES" #Xeon has hardware support for AES which will get used by geli
          geom_eli_load="YES"
          ahci_load="YES"
          vesa_load="YES" # we want a better console
          if_lagg_load="YES" # required if you want to bond multiple devices
          loader_logo="beastie" 
          vfs.root.mountfrom="zfs:zroot"
          geli_ada0p3_keyfile0_load="YES"
          geli_ada1p3_keyfile0_load="YES"
          geli_ada0p3_keyfile0_type="ada0p3:geli_keyfile0"
          geli_ada1p3_keyfile0_type="ada1p3:geli_keyfile0"
          geli_ada0p3_keyfile0_name="/boot/encryption.key"
          geli_ada1p3_keyfile0_name="/boot/encryption.key"
          hw.snd.latency=7
          vm.kmem_size_max="64G"
          vm.kmem_size="48G"
          # zfs tuning i picked up from all the pages everywhere
          vfs.zfs.prefetch_disable="1"
          vfs.zfs.txg.timeout="120" # how often transaction groups are flushed to disk; beware, up to 120 seconds of data loss on power failure if you don't have a UPS
          vfs.zfs.txg.synctime_ms="500"
          vfs.zfs.arc_max="20G"
          # low level tuning of the vdev device
          vfs.zfs.vdev.min_pending=4
          vfs.zfs.vdev.max_pending=10
          vfs.zfs.vdev.cache.size=64M
          vfs.zfs.vdev.cache.max="65536" 
          vfs.zfs.vdev.cache.bshift="16"
          # Useful if you are using an Intel gigabit NIC
          hw.em.rxd=4096
          hw.em.txd=4096
          hw.em.rx_process_limit="-1"
          kern.maxvnodes=250000
          atapicam_load="YES"
        3. my /etc/sysctl.conf
          # basically energy settings: try to send most of the cores into C-state C3 but keep one core at C2
          dev.cpu.0.cx_lowest=C2
          dev.cpu.1.cx_lowest=C3
          dev.cpu.2.cx_lowest=C3
          dev.cpu.3.cx_lowest=C3
          dev.cpu.4.cx_lowest=C3
          dev.cpu.5.cx_lowest=C3
          dev.cpu.6.cx_lowest=C3
          dev.cpu.7.cx_lowest=C3
          # the Xeon supports stepping down to a 200MHz clock :)
          debug.cpufreq.lowest=200
          # Network Tuning, more buffers etc.
          net.inet.tcp.rfc1323=1
          kern.ipc.maxsockbuf=16777216
          net.inet.tcp.sendspace=1048576
          net.inet.tcp.recvspace=1048576
          net.inet.tcp.sendbuf_max=16777216  
          net.inet.tcp.recvbuf_max=16777216
          net.inet.tcp.sendbuf_auto=1
          net.inet.tcp.recvbuf_auto=1
          net.inet.tcp.sendbuf_inc=16384 
          net.inet.tcp.recvbuf_inc=524288
          net.inet.tcp.inflight.enable=0
          # ZFS tuning
          vfs.zfs.prefetch_disable=0
          vfs.zfs.l2arc_write_max=200000000 # the SSD can deliver much more than the 8MB/s default; beware the wear that comes with it
          vfs.zfs.l2arc_write_boost=380000000 # same here: faster L2ARC warm-up at the cost of additional SSD wear
          vfs.zfs.l2arc_noprefetch=0
  • if there are any BSD gurus out there reading this, any hints, especially in regards to power consumption, are very welcome. The system right now, with the specs above, draws ~52W (with 4x 3TB WD Red installed), which is more than 100W less than my previous server 🙂
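
For completeness, a sketch of how the geli providers referenced in loader.conf above could be prepared (the key path matches the config; -s 4096 gives geli the fixed 4k sector size mentioned earlier, -b attaches at boot, -P/-p skip the passphrase so the keyfile alone unlocks the disks):

dd if=/dev/random of=/boot/encryption.key bs=64 count=1    # generate a random key file
geli init -b -P -K /boot/encryption.key -s 4096 /dev/ada0p3
geli init -b -P -K /boot/encryption.key -s 4096 /dev/ada1p3
geli attach -p -k /boot/encryption.key /dev/ada0p3         # ZFS then sits on ada0p3.eli / ada1p3.eli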

Update 24/Feb/2013: I’ve installed my old 3TB Seagate Barracuda drives now (4x). Alternate says they have a power consumption of 8W each. I am using the 4 remaining SATA 3Gb/s ports on the Supermicro board instead of the 4 remaining 6Gb/s ports on the LSI card. Despite only half the speed (not that they would reach that anyway), they were properly detected in FreeBSD, which means I could send them IDLE/STANDBY commands via camcontrol. So I created a zpool called “FILES ARCHIVE” with those disks and set them to go into standby mode as soon as possible. With this setting, only the 4 WD Red drives and the SSDs stay active, and the system consumes only 56 watts. Considering my old system used more like 160 watts, I am pretty happy with the result 🙂
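
The standby setup itself is one camcontrol call per drive; a quick sketch (ada4 is an example device name, timers are in seconds):

camcontrol standby ada4 -t 1800   # spin the drive down after 30 minutes without I/O
camcontrol idle ada4 -t 600       # alternatively, the lighter IDLE state after 10 minutes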

Dec 30

Mission Goal

So what’s this all about… I’ve been running a homeserver for a few years now, using various Linux distros. The server was usually more or less built from leftovers of previous workstations. The old system also included 2x 3-slot SATA hard-disk enclosures connected to a Promise RAID controller. It was a 2.6GHz Core2Duo with 4GB memory and 2x 250GB system drives. The system stored all our data, videos and photos via Samba/Netatalk and took backups (bacula) of my various servers on the net. It was also running a Nagios monitoring installation. Everything was encrypted with the usual Linux LUKS stuff.

PC V2120

So why a new system?

  • Energy consumption too high. As you may know, energy isn’t as cheap anymore. With the recent changes in Germany and some not-so-clever decisions made by our fearless leaders, energy prices skyrocketed and the pressure to build intelligent, less power-hungry systems keeps growing. While I totally agree with shutting down nuclear power plants, I don’t like the way renewable energy is subsidized and how big energy companies profit by putting the burden on the customer. Actually I think core infrastructure such as energy, water, public transport etc. should be handled by the government or non-profit organizations rather than by companies.
  • Next generation filesystem. Everyone’s generating lots of data these days: documents, scans, photos and videos of your family, your music etc. Maybe you already convinced your family to create backups, took the extra step and bought them a Time Capsule or similar. You have a RAID in your homeserver and you are aware that hard-drives wear out over time. Now consider: you store your digital life for decades, do everything right, keep replacing disks and so on, collecting terabytes of data over the years.. you won’t even notice if any of your data gets damaged until it is too late. Here ZFS comes to the rescue by taking care of the integrity of your files: it checksums every block and can keep multiple copies, so it detects broken files and repairs them automatically from a good copy (see the sketch below). As a nice side effect, ZFS also includes compression and deduplication; on Solaris it even includes an encryption layer.
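
In practice, that self-healing looks roughly like this (‘tank/photos’ is a made-up dataset name):

zfs set copies=2 tank/photos     # keep two copies of every block, even on a single disk
zfs set compression=on tank/photos
zpool scrub tank                 # read everything and repair blocks failing their checksum
zpool status -v tank             # scrub progress and any unrecoverable errors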

 

Requirements

  1. Data integrity. The ZFS filesystem is said to require 1GB of memory per TB stored, especially if you want to use deduplication, since this feature holds big hash tables in memory. Afaik, only Oracle Solaris 11 supports the built-in ZFS encryption layer. But Solaris is a commercial system, requiring a support contract if you use it in production. While this isn’t necessarily a problem for a homeserver, the supported hardware list for Solaris is very short. So I will try to stick with FreeBSD and take the performance hit that comes with using GELI encryption below the ZFS layer. Since I will be using SSD drives for the main OS, I’ll try to win some performance back by adding SSD caching to ZFS (see the sketch after this list). I would also love to use ECC memory.
  2. Disk space: at least 6-8TB usable, with room for additional drives
  3. Energy: I am trying to get the most performance at the lowest power draw. Performance is required for encryption, compression and occasional realtime transcoding. Depending on how it works out, everything should spin down or step down when idle.
  4. Remote Access
  5. Low Noise
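
The SSD caching mentioned in requirement 1 is a single command once the pool exists; a sketch with made-up GPT labels:

zpool add tank cache gpt/ssd-cache0                   # an SSD partition becomes an L2ARC read cache
zpool add tank log mirror gpt/ssd-log0 gpt/ssd-log1   # optional: mirrored ZIL for synchronous writes
zpool iostat -v tank 5                                # watch how the cache devices fill up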

Hardware so far…

  • Intel Server Board S1200BTLR – Socket 1155, SandyBridge, 2x Gbit, 2x SATA3, 4x SATA2, onboard GFX, up to 4x 8GB ECC DDR3 memory, IPMI 2.0 ~200EUR
  • Intel Xeon E3-1265LV2 – TDP 45W, QuadCore 2.5Ghz, IvyBridge, AES-NI, HT, Turbo Boost, VT flags ~300EUR (wikipedia)
  • Adaptec 7805 – 8-port SAS/SATA RAID controller, low-profile, 1GB cache, FreeBSD support, supports 4TB drives, up to 256 drives using expanders ~500EUR
  • the cheaper “dumb” controller LSI SAS 9207-8i for ~200EUR even has Solaris support.. and FreeBSD.
  • Lian Li PC V2120 – beautiful, low noise chassis with plenty of room for more disks ~400EUR
  • Western Digital Red Harddrive – 3TB SATA3, low 24dB noise, rated for 24/7 operation, low power: 4.4W read/write, <1W sleep/standby power consumption ~140EUR

 

  • Update: I found a mainboard with USB3 and ECC. Have a look at Supermicro X9SAE.
  • Update: The order is out to 3 different stores. I took the Supermicro X9SAE-V, which allows two x8 PCIe 3.0 slots instead of a single x16 PCIe 3.0, considering I may need more disks some day and a 2nd LSI SAS 9207-8i controller. The rest is as expected, 3x 3TB WD Red drives and the Xeon E3-1265LV2, but with a slight change for the chassis: it’s a Lian Li PC-P80N now. Since it’s just standing in some dark corner anyway, it doesn’t need to look beautiful, but not getting my hands bloody during installation and having 10 3.5″ hard-drive slots as well as 2 2.5″ slots for SSDs is nice. I will be using two Sandisk G25-G3 Extreme SSDs (2.5″, 120GB) for the system itself and as cache for ZFS. The reasons for the Sandisk: it is pretty fast SATA3 (6Gbit/s) with 550MB/s read, 510MB/s write, has a good MTBF of 2,500,000 hours and, as expected, a low 0.6W read/write energy consumption. For the operating system, I will see how a Solaris 11.1 installation works out. If it fails to support anything or does not fully use the power-saving features, I may still consider FreeBSD on the 2nd run 🙂
  • Latest Update: No free updates, no Solaris.

 

This post will be updated as they come in.
