
Until recently, I was only familiar with the idea of a client pulling information down from a database. A database requires updates, patching, and maintenance. Usually I’d be the one doing all that care and feeding, in addition to taking proper care of the client application itself. Recently I’ve been playing with api-ninjas.
API-ninjas requires a free account and provides you with an API key in return. Some of the things you can pull include:
- Airline flight information
- Cocktail recipes
- Conversion of Currency
- DNS lookup
- Holidays
- IP information
The list is quite exhaustive. An API is a great alternative to all that pesky database maintenance above. Api-Ninjas includes sample code for calling the APIs in Python, C#, Ruby and so forth. However, it did not include anything for PowerShell.
Below, I’ve pasted code to get a random fact of the day:
$apikey = "###supersecretapikey###"
$params = @{
    Uri     = "https://api.api-ninjas.com/v1/facts?limit=1"
    Method  = "Get"
    Headers = @{ "X-API-Key" = $apikey }
}
Invoke-RestMethod @params | Format-List
##Sample output##
fact : A sneeze zooms out of your mouth at over 600 m.p.h
To reuse this code for a different endpoint, change the Uri parameter above to the value given by api-ninjas. Examples include:
https://api.api-ninjas.com/v1/cocktail?name=$input #for taking input for cocktail recipes
https://api.api-ninjas.com/v1/iplookup?address=$IP #for taking input for IP address
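Since every endpoint follows the same pattern, it generalizes nicely into a small wrapper. Here’s a rough sketch; the function name, parameter names, and query-building logic are my own, not anything api-ninjas provides:

```powershell
# Hypothetical helper - wraps any api-ninjas GET endpoint.
function Invoke-ApiNinja {
    param(
        [Parameter(Mandatory)] [string] $Endpoint,  # e.g. "facts", "cocktail", "iplookup"
        [hashtable] $Query = @{},                   # query string key/value pairs
        [Parameter(Mandatory)] [string] $ApiKey
    )
    # Build the query string from the hashtable
    $qs  = ($Query.GetEnumerator() | ForEach-Object { "$($_.Key)=$($_.Value)" }) -join '&'
    $uri = "https://api.api-ninjas.com/v1/$Endpoint"
    if ($qs) { $uri += "?$qs" }
    Invoke-RestMethod -Uri $uri -Method Get -Headers @{ 'X-API-Key' = $ApiKey }
}
```

Usage would look something like `Invoke-ApiNinja -Endpoint cocktail -Query @{ name = 'margarita' } -ApiKey $apikey`.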
I came across a situation the other day.
In my Azure Tenant, I have a VM, a domain controller that hosts, well… my domain.
I only use it for testing; most recently I was doing some SSPR testing. I only turn it on occasionally to test some PowerShell scripts, the password reset utility, and other things that only an on-premises Domain Controller can really do.
I didn’t need it for about 2 weeks, so the server sat in a powered-off state. When I did need it again, after powering it on, I realized I couldn’t log in with my Domain Admin credentials. The error was that my password had expired and needed to be reset.
Okay, I’ll use my backup Domain Admin account to reset it. The problem was, the backup Domain Admin account was giving the same error.
Uh-oh.
My primary and backup Domain Admin accounts, on my one cloud domain controller that isn’t replicated anywhere, are both locked out. Now what?
As luck would have it, there’s a way to do this that’s fairly painless and actually quite simple.
1. Create a .ps1 file. The only contents it needs are one line:
net user AD-Admin NewP@ssword!
Name it something relevant like “password_reset.ps1”.
This HAS to be an account that’s active in your AD, preferably a Domain Admin account. The password can be whatever you want, as long as it fits your domain password policy.
2. Go to portal.azure.com -> Storage accounts -> any_of_your_storage_accounts -> containers (create one if you have to) -> upload. Upload the .ps1 file you created in step 1 above.
3. In portal.azure.com -> Virtual Machines -> Your_VM_DC -> Settings -> Extensions -> Add a Custom Script Extension

4. Browse to the storage container from step 2, and point to the .ps1 file created in step 1.
5. Let the deployment run.
6. Log onto your DC VM in Azure with the credentials from step 1 above. Reset any or all of your domain admin passwords that need it.
7. Uninstall and delete the Custom Script Extension from step 3 for this VM. Otherwise, every time it boots it will reset the password for this one user.

Delete that .ps1 file from the storage container too!
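For reference, the same extension can be deployed and removed from PowerShell with the Az module. A rough sketch, assuming you’ve already run Connect-AzAccount; the resource group, VM, storage account, and extension names below are placeholders for your own:

```powershell
# Step 3 equivalent: attach the Custom Script Extension to the VM.
# All names are placeholders - substitute your own.
Set-AzVMCustomScriptExtension `
    -ResourceGroupName "MyResourceGroup" `
    -VMName "MyDcVm" `
    -Location "canadacentral" `
    -Name "PasswordReset" `
    -FileUri "https://mystorageacct.blob.core.windows.net/scripts/password_reset.ps1" `
    -Run "password_reset.ps1"

# Step 7 equivalent: remove the extension so it doesn't re-run on every boot.
Remove-AzVMCustomScriptExtension `
    -ResourceGroupName "MyResourceGroup" `
    -VMName "MyDcVm" `
    -Name "PasswordReset" -Force
```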
There are a few different methods to import users into your Azure tenant.
- In the Azure Active Directory Portal https://aad.portal.azure.com -> Users -> Bulk Operations -> Bulk create
- Or you can use a little powershell
This post will focus on the PowerShell method, mainly because the Azure Portal only requires point and click. Plus, this is way more fun.
The Sample CSV format:
UserPrincipalName | DisplayName | GivenName | Surname | jobTitle | MailNickName | ObjectId | AccountEnabled | AgeGroup | City | CompanyName | ConsentProvidedForMinor | Country | CreationType | Department | FacsimileTelephoneNumber | IsCompromised | ImmutableId | Mobile | PasswordPolicies | PasswordProfile | PhysicalDeliveryOfficeName | PostalCode | PreferredLanguage | ShowInAddressList | State | StreetAddress | TelephoneNumber | UsageLocation | UserState | UserStateChangedOn | UserType |
nabendun@customdomain.onmicrosoft.com | Nabendu Nahasapeemapetilon | Nabendu | Nahasapeemapetilon | $null | 104667339 | True | Minor | Springfield | null | United States | null | 856-511-6827 | 304-960-7231 | Guder Lao | 2810 | Nepali | Illinois | 43090 Jay Drive | 314-812-4954 | US | Member | ||||||||||
jimboj@customdomain.onmicrosoft.com | Jimbo Jones | Jimbo | Jones | $null | 142259518 | True | Minor | Springfield | null | United States | null | 546-298-0636 | 558-695-5632 | Purabaya | 2810 | Hebrew | Illinois | 39176 Weeping Birch Court | 851-166-3492 | US | Member |
And here’s the sample code below.
Make sure to run Connect-AzureAD before you execute it!
$CSV = Import-Csv C:\path_to_CSV_file.csv -Delimiter ","
foreach ($User in $CSV) {
    Set-AzureADUser -ObjectId $User.UserPrincipalName `
        -JobTitle $User.jobTitle `
        -AgeGroup $User.AgeGroup `
        -City $User.City `
        -CompanyName $User.CompanyName `
        -Country $User.Country `
        -Department $User.Department `
        -FacsimileTelephoneNumber $User.FacsimileTelephoneNumber `
        -Mobile $User.Mobile `
        -PhysicalDeliveryOfficeName $User.PhysicalDeliveryOfficeName `
        -PostalCode $User.PostalCode `
        -State $User.State `
        -StreetAddress $User.StreetAddress `
        -TelephoneNumber $User.TelephoneNumber `
        -UsageLocation $User.UsageLocation
    # Echo the row that was just processed
    Write-Output $User
}
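One caveat: Set-AzureADUser only updates accounts that already exist in the tenant. If the accounts haven’t been created yet, they’d need to be created first with New-AzureADUser. A hedged sketch, reusing the CSV columns from the sample above; the temporary password is a placeholder of my own:

```powershell
# Assumes Connect-AzureAD has already been run.
# Build a password profile for the new accounts (placeholder password - change it!).
$PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$PasswordProfile.Password = "TempP@ssword123!"

foreach ($User in (Import-Csv C:\path_to_CSV_file.csv)) {
    New-AzureADUser -UserPrincipalName $User.UserPrincipalName `
        -DisplayName $User.DisplayName `
        -GivenName $User.GivenName `
        -Surname $User.Surname `
        -MailNickName $User.MailNickName `
        -AccountEnabled $true `
        -PasswordProfile $PasswordProfile
}
```

After the accounts exist, the Set-AzureADUser loop above can fill in the remaining attributes.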

Introduction
A little over 7 years ago, I purchased a DS413J. It was everything I needed: lots of storage, ample power, and it served media in the house suitably well. Fast-forward to 2020, and the DS413J is feeling rather aged. The web UI with 2FA login sometimes takes more than two minutes to get fully logged in, transfer speeds of 30Mb/s feel unimpressive, and a reboot can take up to 10 minutes.
I decided it was time to move up to a DS420+. This would serve as my main file/media share while leveraging the faster CPU, upgradeable RAM, and much improved performance.
History
Synology mainly deals in network storage products. The company started with consumer network storage and has expanded into IP surveillance and consumer router hardware. Synology’s network storage scales from consumer to SMB all the way to corporate SAN, and this is where they really shine. A NAS (Network Attached Storage) runs file shares without the overhead of a full server that consumes space, cooling, network, licensing, and power. Most of the NAS models in the DS series, which I’ll cover below, are small, quiet, and very unassuming.
The consumer NAS market is competitive, with names like QNAP, Terra Master, Western Digital, Drobo, and Buffalo to name a few. While I won’t go into each of those name brands, I typically see consumers here in Canada picking between QNAP and Synology.
If you’ve ever wondered about the naming convention of the Synology NAS devices, I’ve broken it down here:

- 1 – The leading letters: [DS][RS][DX]. DS = DiskStation (the form factor you see here), RS = RackStation (rack-mounted NAS), DX = DiskStation Expansion, and so on.
- 2 – The first number(s), sometimes a single digit. This is the maximum number of drives the NAS can house, including expansion units. [e.g. a DS1812+ = 8 disks in the unit, with 10 extra disks allowed from expansion units]
- 3 – The last 2 digits mark the year released. [DS413J = released in 2013, DS420+ = released in 2020]
- 4 – The very last character denotes the performance tier, which varies by market segment. The most common are: J = home entry level, Play = media-specific functions with some encoding, Plus (+) = performance level, XS = top-tier specifications.
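The breakdown above can be captured in a quick regex. This is a toy sketch of my own (the pattern and field names are mine, and it won’t cover every model Synology has ever shipped):

```powershell
# Toy parser for the naming convention described above - illustrative only.
function Parse-SynologyModel {
    param([Parameter(Mandatory)] [string] $Model)
    if ($Model -match '^(?<line>DS|RS|DX)(?<bays>\d+?)(?<year>\d{2})(?<tier>\+|j|play|xs)?$') {
        [pscustomobject]@{
            Line         = $Matches.line          # DS / RS / DX
            MaxDisks     = [int]$Matches.bays     # max drives incl. expansion units
            YearReleased = 2000 + [int]$Matches.year
            Tier         = $Matches.tier          # j, play, +, xs (may be empty)
        }
    }
}

# e.g. Parse-SynologyModel "DS420+"  -> Line DS, MaxDisks 4, YearReleased 2020, Tier +
```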
Audience
Who buys a NAS? Who is it meant for? A NAS is meant for anyone with lots of data that needs to be securely and safely stored in a central location. I emphasize ‘central’ because we all know the pain of multiple USB drives. While convenient, they do end up in odd places or sometimes misplaced when you need them.
This is where a NAS steps in. One location for storing all the files, easily accessible by smart devices, and more flexible and cost friendly over cloud storage. A NAS can also stream media; which means you have the option to watch any owned, stored media on your device of choice. And, no streaming service fees either.
The Synology Diskstation Manager also offers a massive menu of different applications; security, webhosting, authentication, and surveillance. For guys like me, there’s Virtual Machine manager, Radius Server, Active Directory integration – the list keeps growing.
Hardware
CPU | Intel Celeron J4025 2-core 2.0GHz, burstable up to 2.9GHz
Memory | 2GB DDR4 [expandable to 6GB]
HDD Bays | 4 x 3.5″ or 2.5″ SATA HDD/SSD (not included), 2 x M.2 2280 NVMe SSD (not included)
USB | 2 x USB 3.0 (front and back)
LAN | 2 x 1GbE RJ-45
AC | 100 V to 240 V AC
The drive bays are all plastic and screwless. Everything has markings for sliding in standard 3.5″ HDDs, and screws for 2.5″ drives are included. Once the drives are in the unit, they’re snug with no vibration. There’s also a Synology key for each drive bay to lock each one independently. The front of the unit has indicator lights for status, each individual drive, and the power button. There’s one USB 3.0 port on the front and one on the back. Sadly, there’s no eSATA connection for expanded or backup storage. The dual RJ-45 connections can be used independently, teamed, or for failover.
Network protocols | SMB, AFP, NFS, FTP, WebDAV, CalDAV, iSCSI, Telnet, SSH, SNMP, VPN (PPTP, OpenVPN, L2TP)
File system | Internal: Btrfs, ext4. External (connected via USB): Btrfs, ext4, ext3, FAT, NTFS, HFS+, exFAT
RAID types | SHR (Synology Hybrid RAID), Basic, JBOD, RAID 0/1/5/6/10
SSD cache | Read/write cache support, M.2 NVMe SSD support
File sharing capacity | Max local user accounts: 2048; max local groups: 256; max shared folders: 512; max concurrent SMB/NFS/AFP/FTP connections: 500
Virtualization | VMware vSphere 6.5, Hyper-V, Citrix, OpenStack
Software
Once again, the Disk Station Manager web GUI is flawless. On initial boot you’re asked to install the latest DSM, then format any installed Hard Disks. After it reboots again, it’s off to configure your RAID storage. Interesting note here, the official spec sheet mentions Synology Hybrid RAID (SHR) as an option. On first install with 2 disks, SHR was available.

After installing another 2 disks, SHR was absent. I suspect the option was quietly removed in favor of same-size disks, to fit industry standards.

SHR can protect disks of different sizes. This isn’t a deal breaker for me, but it’s worth noting for anyone looking for that functionality. To be fair, it IS best practice to use disks of the same size in any sort of RAID configuration.
The web GUI is incredibly quick and responsive. This is largely because of the Intel Celeron J4025 processor and 2GB of DDR4 RAM. Even after adding two-factor authentication, it’s much speedier than my 413J. Creating shares, installing new packages, and configuring Media services and Video Station are all easy and intuitive. During my initial burn-in period, I mounted some external CIFS shares around my network to copy the data onto this 420+. I was never disappointed; the new DSM even provides an estimated time of completion for large jobs.
Usability
DiskStation Manager (DSM) rocks. Simple as that. Super robust, quick, snappy; it does everything a regular desktop machine would do, right within the browser. Anything is at your fingertips within DSM. Some of the things I use on a regular basis are Hyper Backup, File Station (when I want to do CIFS-to-CIFS transfers), Synology Drive, and Storage Manager.

Everything is intuitively laid out. I do recommend setting Control Panel to ‘Advanced Mode’, just in case you want to see things like the indexing service, external devices, Terminal, or Privileges icons. Everything is very straightforward, and the help menu is, surprisingly, actually helpful. Customization of the login screen, desktop background, color theme, even images or icons is available. I’ve enabled 2FA for login, email notifications, QuickConnect, and media services, all just by clicking around menus. The interface is simple enough to get you where you’re going, yet sophisticated and secure enough to give me comfort when I leave the house.
Features
Super feature-packed. I’ve noticed the Plus (+) series of Synology NAS offers many more packages than the plain “J” series. There’s even a beta package section I’ll be trying out soon. Each new feature brings new things to tweak, and more value to the Synology. Just the other day I configured replication services and Synology Drive; next up will be Directory Server.
It really is a dazzling array of programs this little NAS can run. Multiple sites report using it strictly as a 4K Plex server. I’ve even seen a few startup businesses using some of the bigger Plus (+) models for storage and security with IP cameras. These really are endlessly customizable, and based on the new up-and-coming Kubernetes images, they could one day replace traditional server technology.
The 420+ also offers M.2 slots for an SSD cache. I’m not quite using it yet; perhaps when I try out Mail Station or get heavier into web development, I’ll populate the drives.
There’s also an upgradeable RAM slot on the right of the unit to complement the included 2GB of DDR4. I’ve already got a 4GB stick in there. Not best practice, I know; it should ideally be a matching 2GB stick, but I had an extra stick that matched the voltage lying around and thought I’d give it a shot. It’s been 3 weeks without any sort of hiccup.
The Android app store also has many of the usual items, like file, video, audio, moments, and DS cam. I also noticed there’s a Synology Chat icon in there, which I assume provides secure communications between you and some friends. I’ve been using DS finder since I have two NASes in the house, and it’s been great for checking current usage when I run backup jobs or kube containers.
Verdict
Absolutely worth every penny! Speed, security, rich features, and a reliable name brand. Synology really improves DSM with every release; DSM 7.0 is already in beta testing, and hopefully sees a general release within 2020. My only complaint is the missing eSATA connection on the back of the unit; I could use some of the bigger DX expansion units, if I could ever fill that much space! For the price, the included features, and the never-ending applications for any sort of business or personal need, this is another near-perfect offering from Synology.
The last firmware released for the DNS323 was back in 2013. That was quite a while ago, and it wasn’t great: no SMB2, no SSH out of the box, and no development of popular applications. I tried Alt-F on a spare DNS323 as a test to see if I could get rsync up and running.
This isn’t meant to be an expansive entry on the pros and cons of this firmware. This is a straightforward approach to configuring the DNS323 as an rsync target for backups, compatible with Synology DSM 6.3.
Let’s not kid ourselves, this device is pretty old. The last time it was sold anywhere was around 2007; as of this writing, that was 14 years ago. The processor is 500MHz, it’s got 64MB of RAM, and the maximum data transfer possible is 10MBps. I do NOT recommend putting any sort of production or super-important data onto this. I’m using it because I love to tinker and I have an over-abundance of spare hard drives. So please, as interesting as this entry is, if you want performance, look at a modern NAS with warrantied drives and up-to-date specifications!
Moving along…
The Coles Notes version of the Alt-F installation:
1. Download the latest Alt-F firmware.
2. Log into your DNS323 and apply the Alt-F firmware.
*I take no responsibility past this point. These instructions are recommendations and should not be taken verbatim. This is not an official support channel. Take all the necessary precautions to back up your data beforehand.
3. Create a login password; this will also act as your ‘root’ password.
4. Format your disks. EXT2/EXT3/EXT4 and a few others are available.
It’s your choice whether to stick with RAID 1/0 or JBOD. I’m using older disks, and this is strictly backup for my purposes.
Create the Rsync User
Let’s create an rsync user first.
Setup -> Users
Note the full name is the “Windows name”, while the nickname is the “Linux login” name. Take particular note of the Linux name; this is what the Synology needs to initiate a backup.

Create a folder and Share
Now we’ll need to create a share to mount the backup.
Setup -> Folders

Note the mounted drives. I configured mine independently.
- sda2 – 500GB drive
- sdb2 – 1000GB drive
I gave mine a share name of “backup_share”, then hit ‘Create’.
Once created, change permissions accordingly.

With the drive folder and permissions set, now configure the share.
Services -> Network -> smb -> Configure

Create a share based on the folder you created earlier.


As a test, make sure you can browse the share from Windows Explorer:
ie. (\\DNS323\backup_share)
Use the username and password you created above, and make sure you can create files and folders. Notice you can enable SMB1 and SMB2 from this panel. I tried to disable SMB1, but that just made the share disappear from my Windows 10 Explorer. Could be a bug they’re working out.
Side Quest – SAMBA module
There’s also an ‘Advanced’ button in Samba Setup. Use the same root password to see the contents.

This panel is a bit more graphical in presentation, and the ‘view’ icon gives a good overview of the currently published shares. Spend a little time looking around; there could be some tweaks you’ll find useful in this section.

Rsync Service Setup
Let’s setup this DNS as the rsync target.
Services -> Network -> inetd

Hit ‘configure’ on the rsync service

Configure a new folder based on the path and user you created above.

- It’s easier to use the built-in browser to get to your folder; otherwise, if you already know the path, you can enter it here. Remember, this is Linux: all the directory slashes are ‘/’
- The module name is the viewable share name in Windows
- Add your comments as necessary
- Set permissions for the rsync account created above
Now, let’s validate that the folder created above (ie. /mnt/sdb2/backup_share) exists in the rsync configuration. We’ll use an SSH client for this; a regular connection as root@DNS323 works. Go to /etc and run more on rsyncd.conf.
The top line should give the location of rsyncd.secrets, a password file that only rsync users should have access to.
And the bottom portion should provide the recently created directory with permissions for your rsync user.
PS C:\> ssh root@dns323-2
root@dns323-2's password:
[root@DNS323-2]# cd etc
[root@DNS323-2]# more rsyncd.conf
#Sample contents
secrets file = /etc/rsyncd.secrets
use chroot = yes
read only = yes
dont compress = *
list = yes
...
[rsynrsyn]
comment = rsync backup directory
path = /mnt/sdb2/backup_share
auth users = rsynrsyn
uid = rsynrsyn
gid = users
read only = no
You can tweak this to do things like ‘hosts allow’ for a certain subnet. For now, I’m just focusing on getting rsync running.
While you’re in here, have a look at your rsyncd.secrets file. Ideally, this should list just one rsync user with a password. Something like:
rsynrsyn:password
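Before pointing DSM at it, you can sanity-check the rsync daemon from any machine with an rsync client. A quick sketch, assuming rsync is installed on the client and using the hostname and module name configured above:

```powershell
# With "list = yes" in rsyncd.conf, listing the bare host shows the exported modules:
rsync rsync://dns323-2/

# Listing the module itself should prompt for the rsyncd.secrets password:
rsync rsync://rsynrsyn@dns323-2/rsynrsyn/
```

If the module shows up and the password is accepted, DSM’s Hyper Backup should have no trouble connecting.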
DSM – Setup HyperBackup
Now we can create a backup job targeting the DNS323 (with Alt-F firmware). Create a new backup job and choose rsync as the file server type.

Settings should be similar to below.
For the backup settings, configure the Server type as ‘rsync-compatible server’, enter in the pertinent details of your DNS323. It should look similar to the screenshot below. For port, just keep the default 873. The Backup module, make sure to use the exact same “Path” from the rsyncd.conf file.
ie. path = /mnt/sdb2/Backup_share
Backup module = /mnt/sdb2/Backup_share
Directory = Backup_directory
And this creates a new directory of whatever name you want.

After this you should be able to select items to backup. Set your items, schedule them and make use of the rotational backups (very handy).
Be aware of the speeds: even with SMBv2 enabled, backup jobs are still pretty slow over rsync, hovering around 1.2MB/s. So time your backups accordingly, and be aware that DSM Hyper Backup cannot run simultaneous backups.
-Dexter