Note: in this how-to we will use NAT-Traversal. This is not officially supported because of the lack of QoS; however, for a PoC or a small office, it is enough.
Step 1: Configuring a Site-to-Site IPsec VPN between Microsoft Azure and the UbiRouter.
Create a vNet with your first subnet.
The vNet has to have these parameters (the IP range is up to you, though) / Dashboard > New > Networking > Virtual Network:
Be careful: networks cannot overlap, and Ubiquiti does not allow identical network prefixes. For example, I cannot build a tunnel between 192.168.0.0/17 (or any larger range) and 192.168.X00.0/24.
Do not forget to create a public IP address for the gateway.
Deploying the gateway may take some time, between 5 and 10 minutes…
Create the Local Network Gateway (with the public IP of your Internet box, provided by your network provider) / Dashboard > New > Networking > Local Network Gateway:
Create a VPN Connection and link the LocalGateway to the VirtualGateway (the connection will not work yet) / Dashboard > Virtual Network Gateways > VirtualGateway > Connections > + Add:
Now there are 2 possibilities. Either you want to set everything up through the command line only, in which case just copy-paste these commands:
configure
set vpn ipsec auto-firewall-nat-exclude enable
set vpn ipsec ike-group FOO0 key-exchange ikev2
set vpn ipsec ike-group FOO0 lifetime 28800
set vpn ipsec ike-group FOO0 proposal 1 dh-group 2
set vpn ipsec ike-group FOO0 proposal 1 encryption aes256
set vpn ipsec ike-group FOO0 proposal 1 hash sha1
set vpn ipsec esp-group FOO0 lifetime 27000
set vpn ipsec esp-group FOO0 pfs disable
set vpn ipsec esp-group FOO0 proposal 1 encryption aes256
set vpn ipsec esp-group FOO0 proposal 1 hash sha1
set vpn ipsec site-to-site peer 7.7.7.7 authentication mode pre-shared-secret
set vpn ipsec site-to-site peer 7.7.7.7 authentication pre-shared-secret <PASSW>
set vpn ipsec site-to-site peer 7.7.7.7 connection-type respond
set vpn ipsec site-to-site peer 7.7.7.7 description ipsec
set vpn ipsec site-to-site peer 7.7.7.7 local-address 192.168.1.1
set vpn ipsec site-to-site peer 7.7.7.7 ike-group FOO0
set vpn ipsec site-to-site peer 7.7.7.7 vti bind vti0
set vpn ipsec site-to-site peer 7.7.7.7 vti esp-group FOO0
set interfaces vti vti0
set firewall options mss-clamp interface-type vti
set firewall options mss-clamp mss 1350
set protocols static interface-route 10.1.0.0/16 next-hop-interface vti0
commit ; save
Or you have an existing connection and you want to edit it. In that case, go back to the UbiRouter to set up the VPN / VPN > IPsec Site-to-Site > + Add Peer:
Edit: in my example, I connected subnet 192.168.100.0/24 on purpose. However, if you want to create several sessions, I would recommend putting 192.168.1.1 (your router IP) in “local IP”, then adding several local + remote subnet pairs.
******* ALL SET *********
net.ipv4.conf.vti0.disable_policy = 1
net.ipv4.conf.vti0.disable_xfrm = 1
[ vpn ipsec site-to-site peer XXXXXPUBLICipOFmyAZUREgtwXXXXX vti ]
[ system offload hwnat disable ]
This change will take effect when the system is rebooted.
Now you should see something like this on your UbiRouter :
On the connection page in the Azure portal, you should normally see the connection marked as a success and connected. Sometimes my connection appears as “unknown”; this is a display issue, no worries there.
In this topic, we will discuss how to easily deploy an infrastructure using the CLI (Command Line Interface). We will deploy a 3-tier app on Azure.
Let’s imagine you are a company that regularly creates marketing events. The cool thing would be to have a template you can deploy every time you need it, with just a few small adaptations (changing the background picture of the web page, for example).
The development repositories of the app (front and back) are stored on GitHub here and here.
With Azure, you can run the CLI on your own machine or directly in the web browser (see below).
In this thread we will focus on the CLI from my machine.
Before doing the “copy-paste” part of the demo, let’s discuss my sample script.
First, go to GitHub and download my 2 bash scripts here.
In this repo you will find 2 script files. Basically, you just copy-paste them into your CLI to get the 3-tier app running. However, due to network latency (on the CLI machine), I do not recommend copy-pasting the entire script at once; copy-paste parts of it instead.
IMPORTANT NOTES :
My script contains some customer-specific info (like passwords and the name of the RG; you will also find my name in the script a few times, so just do a Ctrl+F and replace my name with yours).
The script applies the commands serially; if you want to save some time, just open several windows and copy-paste the 3 tiers separately.
Front end: based on Ubuntu 14. The user/password is in the script. After each VM is deployed, a script file is applied to set up the server. The script file is downloaded from Azure, but if you want your own, just replace the link https://rgcloudmouradgeneralpurp.blob.core.windows.net/exchangecontainermourad/sh_bootstrap_pu.sh with yours. Basically, the front end is a PHP page where a customer registers for an event.
Back end: a Flask API running as a Python app. This app takes the input from the front end and fills a database.
DB server: a single MySQL instance. The script installs a MySQL server and loads a dump from Azure Blob storage; you can dump this table later if you want.
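To give an idea of the pattern the scripts follow, here is a minimal sketch of provisioning one tier with the az CLI. This is not my actual script: the resource group, VM name and bootstrap URL below are hypothetical placeholders, and you would repeat the same pattern for the other tiers.

```shell
# Hypothetical names -- replace with your own resource group, region and script URL.
RG="rg-demo-3tier"
LOCATION="eastus"

deploy_front_end() {
  # Resource group that holds the whole stack
  az group create --name "$RG" --location "$LOCATION"

  # Ubuntu front-end VM
  az vm create --resource-group "$RG" --name vm-front01 \
    --image UbuntuLTS --admin-username azureuser --generate-ssh-keys

  # Custom Script Extension: downloads and runs the bootstrap script on the VM
  az vm extension set --resource-group "$RG" --vm-name vm-front01 \
    --name CustomScript --publisher Microsoft.Azure.Extensions \
    --settings '{"fileUris":["https://example.com/bootstrap.sh"],"commandToExecute":"bash bootstrap.sh"}'
}
```

Call deploy_front_end after az login; each call prints the JSON confirmation mentioned below.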
Every time a resource is deployed you will get a confirmation in JSON format:
It takes a while, especially the VM deployments and the extension script (the script that installs Apache and downloads from the Git repo). At the end of the 2 scripts, go to your resources in the Azure portal (adapt it if you want); you will have this infra:
Now let’s try the app.
In my script I set the DNS name of the load balancer IP to “demofrontweb”; you then add the region where the IP is located (eastus for me) and “cloudapp.azure.com”, the generic DNS suffix for Azure’s built-in offer: http://demofrontweb.eastus.cloudapp.azure.com/ .
I register and submit :
For the purpose of this demo, the only way to connect to the DB server is to SSH into a VM used as a control VM, with public SSH access from the Internet and only port 22 open.
Here is the connection, DB selection and Table :
To confirm my request has been recorded, I go in the database server and search for my details :
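The hop through the control VM can be done in one command with SSH jump-host support. This is a sketch only: the IP addresses, database name, table name and credentials below are hypothetical placeholders for your own deployment.

```shell
CONTROL_VM="1.2.3.4"   # public IP of the control VM (placeholder)
DB_HOST="10.0.2.4"     # private IP of the DB server (placeholder)

query_registrations() {
  # -J uses the control VM as a jump host; only its port 22 is open to the Internet
  ssh -J "azureuser@$CONTROL_VM" "azureuser@$DB_HOST" \
    "mysql -u root -p -e 'USE eventdb; SELECT * FROM registrations;'"
}
```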
In this post we are going to discuss how we can leverage Azure DevOps for Infrastructure as Code (IaC). There are plenty of ways to address the use case, but I will take the easiest way here. The first step will be to use it for CI. For CD, Ansible will be used along with Azure DevOps.
The end target is to create and operate HUBs (prod and pre-prod, in a CI/CD fashion).
Start by creating an Azure DevOps project. Since we are going to simulate production and pre-production environments for IaC, we will do some segregation for compliance and governance concerns.
Assuming you have created an organization already, I created the project IaCHUBv1 :
I went to GitHub and created 2 private repos (empty for the moment), one for production and a second for pre-production, as I want the ability to manipulate each one at will.
Now I go to Azure, where I am going to use 2 subscriptions: one for the production and one for the pre-production environment.
On those 2 subscriptions, I am going to create 2 service principals that will be the identities performing the deployments. You can use this command to check:
az account list --output table
This will give something like this :
I am going to select the first and the third with this (you will need to repeat the two commands below twice, once for each subscription):
az account set --subscription <Azure-SubscriptionId>
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SUBSCRIPTION_ID"
This will give you the service principals (example for the first one):
These values map to the Terraform variables as follows; note them somewhere safe for the moment (you will not be able to retrieve them later):
appId (Azure) → client_id (Terraform).
password (Azure) → client_secret (Terraform).
tenant (Azure) → tenant_id (Terraform).
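An alternative to putting these values in Terraform variables is to export them as the environment variables the azurerm provider reads natively. The GUIDs below are placeholders; substitute the values from your own az ad sp create-for-rbac output.

```shell
# Map the service principal output to the azurerm provider's environment variables.
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"       # appId
export ARM_CLIENT_SECRET="your-sp-password"                        # password
export ARM_TENANT_ID="11111111-1111-1111-1111-111111111111"        # tenant
export ARM_SUBSCRIPTION_ID="22222222-2222-2222-2222-222222222222"  # subscription you targeted
```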
Now go to the Terraform plugin for Azure DevOps and install it into your organisation.
We need to authenticate to Azure DevOps using SSH keys. If we have SSH keys, we can skip this step, and jump to the next one.
ssh-keygen -C "email@domain.com"
The process will generate two files id_rsa and id_rsa.pub files. The first one is the private key (don’t share it) and the second one is the public key.
After we generate the SSH keys, it is time to upload them to Azure DevOps. We open the Azure DevOps website, click on our profile picture, then on the 3 dots, and finally on the User settings option.
On the menu, we click on the SSH public keys option:
Now we add the empty repos previously created :
Repeat the operation for every repo you want to add
Then go to the Files section to get the URL of the files:
Now I go to my laptop, where my code is sitting, and I will git init the location.
I will type these commands from the location I selected:
git init
git remote add origin git@ssh.dev.azure.com:v3/mouradcloud/IaCHUBv1/iachubv1preprod
git add .
git commit -m "commit with terraform files upload initial"
# there is a readme file in the repo, you first need to pull before pushing:
git pull git@ssh.dev.azure.com:v3/mouradcloud/IaCHUBv1/iachubv1preprod
# then push
git push --set-upstream origin master
You will be able to see uploaded files :
Creating now the Azure pipeline to deploy our IaC
Then select source :
then empty jobs :
We will change the name of the agent. For now, we will be using the Azure DevOps free build agent. We can add an agent pool later if more power is needed, but that is not free; you will need to declare the VMs that will do the work.
Then we will add a task to copy files :
We choose the Repos source folder, and select to copy all content. We will set the target folder as : $(build.artifactstagingdirectory)/Terraform
We click the plus sign (+) to add a new task. Type Publish Build Artifacts and leave it with the default parameters:
There is an option that triggers a CI build when the source repo is modified and changes are committed (we will not activate this for the HUB; there is no special interest since this is a near-static environment. However, we can create new CIs using subfolders in the Terraform modules if we want to add minor changes to the configuration).
we do not check the box !!
Then queue it, so the CI is picked up by an available agent that will run it.
It creates a build for you, which is basically nothing more than copying files from the repo to the agent that will run the later tasks.
Now, while checking the logs :
You should receive an email telling you that your build is successful. If the status of the job is Success, we are ready for the next step, where we are going to create the Release Pipeline.
In this stage, we will use the artifact generated by the build pipeline and create a Stage with the following tasks:
In the Select a template page, we choose an Empty job template:
Then we click on the Add an artifact button.
In the Add an artifact page, we choose the Build button and configure the Source (build pipeline) to use the build pipeline created on the previous step.
We click the Add button. DO NOT (yet) click on the lightning icon to activate CD (Continuous Deployment); we keep this for other uses, not for HUBs.
We close the Continuous deployment trigger page and rename the pipeline:
Now, we need to configure the Stages. Click on the Stage 1 button to rename the stage name.
We close the Stage name page and then click on the 1 job, 0 task link on Terraform deployment button.
We click on the plus sign (+), next to the Agent job and search for terraform.
We select the Terraform Installer task and click on the Add button next to it.
The Terraform Installer task is added with the latest version of Terraform, but I am changing it to another version:
Then we add a Terraform task for the init command. The terraform init command initializes a working directory containing Terraform configuration files; it is the first command that should be run.
We select the Terraform CLI task and click on the Add button next to it, in this step, we will configure Terraform CLI for Terraform Init.
Configure the init Command, the Configuration Directory to use the drop/Terraform folder of the Build Pipeline and select azurerm in the Backend Type dropdown.
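Under the hood, this pipeline step runs roughly the following command. The backend names below (resource group, storage account, container, state key) are hypothetical placeholders; the pipeline fills in whatever you configure in the AzureRM backend section.

```shell
# Sketch of the "terraform init" the task runs against the azurerm backend.
tf_init_backend() {
  terraform init \
    -backend-config="resource_group_name=rg-tfstate" \
    -backend-config="storage_account_name=sttfstatedemo01" \
    -backend-config="container_name=tfstate" \
    -backend-config="key=hub.terraform.tfstate"
}
```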
We have not configured the AzureRM backend yet, so this is the perfect moment.
Expand the AzureRM Backend Configuration and select an existing Azure Subscription. If we don’t have an Azure Subscription configured, we click on + New button to configure one.
This takes you to the connections page:
Then, we select the Service principal (manual) option.
Then you will enter the above information that you received when you created the service principals :
After registering all connections, you will see them available there :
Then, we configure the Azure Remote Backend and we have a few options:
Use the Create Backend (If not exists) option
Use an existing remote backend created with scripts, if one exists
Automate the process adding an extra task on the pipeline.
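The second option can be sketched as a small script that pre-creates the remote backend yourself. All names here are hypothetical placeholders; note that storage account names must be globally unique.

```shell
# Pre-create the Terraform remote backend (resource group + storage account + container).
create_remote_backend() {
  az group create --name rg-tfstate --location eastus
  az storage account create --name sttfstatedemo01 \
    --resource-group rg-tfstate --location eastus --sku Standard_LRS
  az storage container create --name tfstate --account-name sttfstatedemo01
}
```

Run create_remote_backend once per environment, then point the init task at these names.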
To validate, we will just start the release to make sure everything is working fine; we will do the terraform apply afterwards.
So it created a storage account in a resource group for service usage; this will be used to store the Terraform state (note that these are Windows logs… I forgot to switch the agent to Ubuntu Linux; I will correct this).
Let’s move on now to terraform validate, plan and apply.
However, I will do it in 2 steps:
Step 1: terraform init, validate and plan
Step 2: adding terraform apply as a second task; I want to be sure there are no errors before applying changes (this is useful if I want to bring in management validations).
Also (optional but recommended), you can adjust the retention; otherwise you will end up with a lot of artifacts, releases, etc., and you might get lost quickly.
Also optional: you can report the release pipeline tasks to Azure DevOps Boards:
Unfortunately, for the second task we need a second “init” step, because the terraform job needs the details for the state:
Do not forget to set the second task to manual, otherwise the deployment will be automatic:
then create the release :
Seems OK ……
Till the end of the deployment :
To finish, we can use Key Vault to store secrets and call the secrets using environment variables. If secrets are present, you just need to refresh and you will see them.
Remember that we created an Azure Key Vault through Terraform and updated a secret through it.
For the sake of the demonstration, I will add a new secret through the Azure shell in the portal.
First we need to get the user’s Azure AD object:
az ad user show --id <email-address-of-user>
You will be looking for the “objectId” value, or just query it directly:
az ad user show --id <email-address-of-user> --query "objectId"
Then allow your user to manipulate the vault:
az keyvault set-policy -n <your-unique-keyvault-name> --spn <ApplicationID-of-your-service-principal> --secret-permissions get list set delete --key-permissions create decrypt delete encrypt get list unwrapKey wrapKey
Then finally create a secret entry :
az keyvault secret set --vault-name "ccppdhbkvlt" --name "badpassword" --value "hVFkk965BuUv"
Then you select the secret you want to import :
Then save, go to the Variables section of your release pipeline, and link the variable group that is in Key Vault:
Now you just need to use the variable ‘badpassword’ in the variable.tf and variable.tfvars files and you are good to go. This can typically be used to store passwords for VMs, for instance.
In the previous article I used Azure DevOps for CI. After some time, Ansible proved very handy for CD (continuous deployment), mainly for configuration and post-configuration. I have some Linux VMs that need to be configured, and a DNS server too, for my 2 HUBs.
Based on my previous topology below :
I would like to do operational routines like :
Update routes on my NVA
Update nginx configuration files for my sites ( WAF )
Install DC role on my DC/DNS and maybe to change it
I am going to create a connection between my Ansible VM and Azure DevOps, and I will do CD through the code repository. My Ansible VM is in the management subnet.
Before creating your client, you need to create a PAT (Personal Access Token) so that your application can call Azure DevOps and interact with it in a REST manner.
But before going further, a good practice is to use a service account created in Azure Active Directory with the DevOps Administrator role:
Now that you have the role and are logged in with it, you can go straight to Azure DevOps:
Give it a name and grant it some rights. In my case, I want to start some release pipelines when I trigger calls.
Make sure you save the token…
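A quick way to sanity-check the token before writing any client code: Azure DevOps REST calls authenticate with basic auth, using an empty username and the PAT as password. The token value below is a placeholder; the org name is the one used later in this post.

```shell
AZDO_PAT="xxxxxxxxxxxxxxxx"   # the token you just saved (placeholder)

list_projects() {
  # Lists the projects in the organisation as JSON; a 401/203 means a bad PAT
  curl -s -u ":$AZDO_PAT" \
    "https://dev.azure.com/mouradcloud/_apis/projects?api-version=6.0"
}
```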
Let’s code now. Create the client using your favorite language (Python SDK source here). First, install the library:
pip install azure-devops
Then the client code will be:
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
import pprint

# Fill in with your personal access token and org URL
personal_access_token = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXX'
organization_url = 'https://dev.azure.com/mouradcloud'

# Create a connection to the org
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)

# Get a client (the "core" client provides access to projects, teams, etc)
core_client = connection.clients.get_core_client()

# Get the first page of projects
get_projects_response = core_client.get_projects()
index = 0
while get_projects_response is not None:
    for project in get_projects_response.value:
        pprint.pprint("[" + str(index) + "] " + project.name)
        index += 1
    if get_projects_response.continuation_token is not None and get_projects_response.continuation_token != "":
        # Get the next page of projects
        get_projects_response = core_client.get_projects(continuation_token=get_projects_response.continuation_token)
    else:
        # All projects have been retrieved
        get_projects_response = None
And… voila :
Now the fun part, let’s trigger a pipeline !
Here is the syntax to run a pipeline:
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
import pprint

# Fill in with your personal access token and org URL
personal_access_token = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXX'
organization_url = 'https://dev.azure.com/mouradcloud'
project_name = "IaCSpokeStorageNFS1"
pipelines_name = 'IaCSpokeStorageNFS1-CI-preprod'
pipelines_id = 13

# Create a connection to the org
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)

# Create a pipelines client
def create_pipeline_client():
    # Get connection from DevOps and create the pipeline client
    pipeline_client = connection.clients_v5_1.get_pipelines_client()
    return pipeline_client

# Get the first page of pipelines
pipeline_client = create_pipeline_client()
pipeline_list_response = pipeline_client.list_pipelines(project_name)
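As a side note, the same pipeline can also be queued without any Python at all, via the Azure DevOps CLI extension (az extension add --name azure-devops). This is a sketch using the project and pipeline names from the sample above; run it after az devops login with your PAT.

```shell
# Queue the release pipeline through the Azure DevOps CLI extension.
run_pipeline() {
  az pipelines run \
    --organization "https://dev.azure.com/mouradcloud" \
    --project "IaCSpokeStorageNFS1" \
    --name "IaCSpokeStorageNFS1-CI-preprod"
}
```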