Sunday, February 14, 2021

Damn youtubers...

The story that "El Rubius" is moving to Andorra has been dragging on for several weeks. To pay less tax, they say. How selfish of him, right? As almost always, things are not as simple as they seem, and in this post I would like to lay out my point of view and explain what, in my opinion, is the root of the matter.

 

The real problem is not that a handful of successful youtubers go to Andorra to pay less tax. That is small change, and it lets the mass media generate juicy stories about it. Stories that are nothing but a smoke screen so that you don't look at the real problem: people are leaving Spain because of the lack of opportunities, stability, decent salaries, security, the chance to start a family...

 

I took that path myself in 2014, fed up with low wages and so much precariousness that my partner and I could do little more than live hand to mouth in a very modest rented flat. Work-life balance wasn't even on the table, and it is still pending today. It was the best decision we could have made: our life changed radically after we emigrated to Ireland. Personally, that is where I discovered that my talent was recognized and well paid. We were able to save, get married, buy a home and even have our first child. All of that in only three years! In 2019 we decided to return to Spain, for the family and because, if the job is good, Spain is a great place to live. But I made sure to come back working for the same company I worked for in Ireland, with pay similar to what I earned there, which in Spain would make me a "privileged" person (and no, I am not privileged; the problem is that, on average, salaries in Spain are very low). Our second child is on the way and the flat we bought is almost paid off.

 

In Spain companies pay badly and job stability is terrible. For this to work, deep changes are needed to simplify everything. At the same time, there should be more inspections to make sure that nobody, neither companies nor workers, cheats to pay less tax. Because we complain about the rich who evade taxes, but then the plumber offers us an invoice without VAT and we are the first to say go ahead. The conclusion is that, in Spain (in general), everyone avoids paying taxes as much as they can. What Spain should really do to keep 2 million people from leaving the country in search of a better life is improve the labour and tax framework so that leaving is not necessary. Two million fewer people paying taxes is a lot, and even if they don't earn like "El Rubius", in total they move much more tax money than that guy. Procedures and contracts need to be simplified, and at the same time we need to be more thorough in checking that things are done right. My wife, for example, set up a company while we were in Ireland for the work she does: the whole process was easy and online. She only had to sign one document by hand, which the government sent by post to our address together with a prepaid return envelope. She paid no self-employment fee, only a percentage of turnover that depended on the yearly revenue, with a fairly wide 0% bracket (I don't remember exactly, but it was at least 5,000 or 10,000 euros of turnover; billing less than that she paid N-O-T-H-I-N-G). That is what genuinely easing entrepreneurship and small business creation looks like, not the terrible paperwork we have in Spain, on top of an extremely high self-employment fee that makes no sense for people who are just starting out. Yes, there is a reduced fee of around 50€ for new self-employed workers, but it only lasts a limited time, while in Ireland you pay nothing until your business starts billing seriously. You don't have the pressure of making the business profitable within a year before the self-employment fee becomes unaffordable. Doesn't that make more sense than our model?

 

In quite a few Nordic countries people pay a lot of taxes, in some cases more than in Spain. But if you ask people about it, they tell you they don't mind paying because they see those taxes being spent well to cover their needs. In Spain, going back to a personal example that I'm sure applies to many people, my local health centre barely has any primary care doctors. During holiday periods, or if one of them falls ill and goes on sick leave, it has often happened that there is nobody to see you. For this second pregnancy we had to go through private health insurance because the midwife in the public system never even called my wife to set up the schedule of visits. It is very hard to defend a system in which you pay a lot of taxes (what I pay monthly in taxes easily covers two pensions, and I'm not exaggerating, nor do I think that's wrong) while at the same time the services are running on fumes. Every time you have to use a public service it is literally a struggle to get attention in many areas, and almost everything is done by phone, as if digitalization hadn't really reached Spain yet (in Ireland every procedure could be done online through websites that were very easy to understand, even for a non-native; for the Spanish tax agency website you need an accountant to explain it to you, when it works...), and we keep operating basically as we did 20 years ago: go somewhere in person, or call (and insist endlessly until someone picks up). Again, it is very hard to justify a system that, on many occasions, simply does not work.

 

And finally, I would like to know what my taxes are spent on. At least that, plus some minimal say in where my money is invested. Because if, as is happening, public money is used for strange experiments or directly to fund political parties, I don't agree at all. I would like to be able to decide that my taxes are only used for things that have an impact on society, such as education, healthcare, pensions, roads... to prioritize where my money goes. I don't want my money to sustain political parties (parties should be sustained by their members), and I want my money to go to healthcare or education before it goes to defence, for example. But I can't: I can neither know where my money will go nor have the slightest say over it. Understand that, in this context, many people decide to leave.

 

Emigrating is a complex personal decision and, believe me, not one taken lightly. Leaving can be very scary, but faced with the recurring lack of prospects in Spain, many people end up with no other option. Wouldn't it be better if the country, instead of pointing the finger at a handful of youtubers, were much more mature and analysed the situation to really understand the root of the problem, and then changed whatever makes not just a few youtubers but more than two million people leave? The first option leads nowhere good, since whoever doesn't see the problem won't change anything to solve it. Criminalizing youtubers is the easy path that leaves us where we were, with a system that, de facto, does not work. The second, more mature and realistic, takes time and effort. But in the future it will leave a better framework so that we all have more and better reasons to make Spain our place of residence.

 

 


 

Sunday, August 16, 2015

FTP active mode in AWS

FTP is one of the oldest Internet protocols. Unfortunately, it was designed for environments where clients and servers interact with each other with minimal restrictions. Therefore, the FTP protocol doesn't work well in scenarios involving NAT and/or firewalls.

As you may know, from the OS perspective AWS instances live in a network with private IPs. When an instance (EC2-Classic or VPC) needs to communicate with another VPC, another region, or a network outside AWS, dedicated devices handle the NAT translation between the private and the public IP. This AWS mechanism gives you maximum flexibility, allowing you, for example, to move a public IP from one instance to another easily.

The FTP protocol was designed without these concepts in mind, so in complex networks involving NAT and/or firewalls it doesn't work well. An example is FTP active mode. Imagine the following scenario:
  • One instance (FTP client) in Frankfurt region
    • Private IP: 172.31.16.100
    • Public IP: 52.28.244.154
  • Another instance (FTP server) in Tokyo region
    • Private IP: 172.31.14.185
    • Public IP: 52.69.174.237
  • Security groups allow any traffic between both instances
 If we try to communicate using a standard FTP client such as ncftp in active mode, we will experience the following issue:
  1. FTP client will be able to connect
  2. FTP client will be able to authenticate
  3. FTP client will fail when trying to list files



The issue is well covered in the public documentation from the ncftp team. Please take a look at it (especially the 'Why PORT Poses Problems for Routing Devices' section).

Inside the instance, the FTP client can only see private IPs. So when the client connects to the remote FTP server, it sends its private IP information. If we perform a packet capture on the FTP server side during the previous test, we will see that the FTP protocol data inside the packet includes a 'PORT' request carrying the private IP of the FTP client instead of its public IP (check the packet highlighted in blue):


Because of this, the FTP data connection from the server will fail (basically, the FTP server won't be able to find a route back to the source instance).
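To make the mechanics concrete (the values below are illustrative, not taken from the capture above): in active mode the client advertises where the server must connect back using a PORT command made of four address octets plus two port bytes:

PORT 172,31,16,100,195,149    (meaning 172.31.16.100, port 195*256+149 = 50069)

The server then tries to open the data connection to 172.31.16.100:50069, a private address it has no route to, so the directory listing never completes.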

To work around this, I recommend one of the following:
  1. Use FTP passive mode instead of active mode. Passive mode works better behind NAT/firewalls, and most FTP servers include parameters to specify the public IP address for this kind of scenario (see the vsftpd example after this list).
  2. Use another method to send files. FTP is an old protocol with many limitations in terms of functionality and security; moving to a modern transfer protocol (such as SFTP, which runs over SSH) could be a good alternative.
  3. Use an FTP client with options to specify the public IP. The FileZilla FTP client has a specific parameter to set the public IP when using active mode.
  4. Patch FTP to send the public IP instead of the private IP. With this method, both instances will be able to establish and complete the communication.
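As an illustration for the first option, this is how vsftpd (one common FTP server; the parameter names are vsftpd-specific, other servers expose equivalent options) can be told which public address and port range to advertise for passive connections, in /etc/vsftpd/vsftpd.conf:

pasv_enable=YES
pasv_address=52.69.174.237
pasv_min_port=1024
pasv_max_port=1048

Remember to also allow the chosen passive port range in the server's security group.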
If for whatever reason you need to use FTP active mode in AWS and the 3rd option is not possible, I suggest patching the FTP client to work around the issue. For example, you can obtain the public IP of an instance from the instance metadata.
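Concretely, from inside the instance the public IPv4 can be read from the standard EC2 metadata endpoint (the command returns nothing if no public IP is assigned):

  1. curl -s http://169.254.169.254/latest/meta-data/public-ipv4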

I created a patch for the latest stable version of the ncftp client that replaces the private IP in the 'PORT' request with the public IP (if available through metadata) when the FTP server (destination) is not a private IP. The result: the FTP server is able to establish the FTP data connection and complete the request, even in active mode. Example output of the same test with the customized FTP client:



Please note in the previous tcpdump output (packet highlighted in blue) how the 'PORT' directive now includes the public IP of the instance instead of the private IP.

If you want to use this customized FTP client, you can download the patched source code from the following link.

Also, if you want to patch the vanilla version yourself, download the latest stable version (v3.2.5) of ncftp from here and patch the following files:

The customized ncftp client requires the curl development libraries (the libcurl-devel package on Amazon Linux) in order to compile correctly.

Finally, here are the patch details:

configure

libncftp/ftp.c

Monday, September 8, 2014

AWS: Recovering keypairs (Linux)

As you know, keypairs are used to connect to AWS instances. During the launch process, you select the keypair associated with each instance. All keypairs have two parts: the private key (the PEM file you download from the AWS console when it's created) and the public key. The public key is configured inside the authorized_keys file associated with the login username.

If the authorized_keys file is modified, or there are ownership/permission issues with this file or with the .ssh directory where it is stored, the keypair will be refused and you will get a Permission denied (publickey) error message:


When this occurs, the main problem is that you won't be able to log in to your instance. An easy way to resolve the issue could be:
  • Stop the instance
  • Create an image from the faulty instance
  • Launch a replacement instance using this new AMI. During the launch process, make sure to select a known keypair (or create a new one)
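For reference, the same stop/image/relaunch flow can also be scripted with the AWS CLI (the IDs and names below are placeholders):
  1. aws ec2 stop-instances --instance-ids i-xxxxxxxx
  2. aws ec2 create-image --instance-id i-xxxxxxxx --name "recovery-image"
  3. aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --key-name my-known-key
Wait until the AMI from step 2 becomes available before launching the replacement instance in step 3.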
An alternative procedure to recover an unhealthy instance is to use a third (healthy) instance and repair the keypair. In our example we have an instance named WIMKP with a keypair named testkey associated:


Unfortunately, something happened and we're not able to log in using the testkey.pem file. To repair it, we'll need to follow these steps:
  • Launch a work instance. We'll use this instance to perform all required operations; when finished, you can terminate it, as it's only needed during the procedure. Make sure you launch the work instance in the same availability zone where the unhealthy instance is hosted.


  • From the AWS web console, EC2 service, Instances section, stop the unhealthy instance:


  • Go to the Volumes section and search for the root volume associated with the unhealthy instance. Set an appropriate name to recognize it easily (in my example, ROOT WIMKP). Also, don't forget to copy the device name associated with the root volume of the unhealthy instance (in my example: /dev/xvda). We'll need this information later:


  • Right-click the root volume of the unhealthy instance and select Detach Volume. Wait until the volume becomes available:


  • Right-click the root volume of the unhealthy instance and select Attach Volume. Select the work instance and attach the volume as a secondary volume of this instance (by default, it'll be attached as the /dev/sdf device). Wait until attached:


  • Copy the keypair file to the work instance and log in to it:


  • As you can see in the previous screenshot, review the dmesg output for details about how the root volume of the unhealthy instance has been recognized. In my example, the device was named internally /dev/xvdf1. If you get an 'unknown partition table' message, it means the secondary volume is identified as /dev/xvdf. Please take this into account and adapt the next mount command to your scenario:
  1. sudo mkdir /disk
  2. sudo mount /dev/xvdf1 /disk
  • Now the root volume of the unhealthy instance is mounted under the /disk directory, so we can review its content and repair it, if required. Because ec2-user is the username required to connect to the WIMKP instance, I'll check its associated files and directories. Feel free to adapt the following check commands to your needs. For example, Ubuntu instances use ubuntu as the default login username, so with Ubuntu instances you'll need to review the home directory associated with the ubuntu username instead of ec2-user:


  • By default:
  1. the /home directory should be owned by root with 755 permissions
  2. the ec2-user home directory should be owned by ec2-user with 700 permissions
  3. the .ssh directory inside the ec2-user home directory should be owned by ec2-user with 700 permissions
  4. the authorized_keys file inside the .ssh directory should be owned by ec2-user with 600 permissions
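A quick way to review all of these at once from the work instance (paths assume the volume is mounted under /disk, as above):
  1. sudo ls -ld /disk/home /disk/home/ec2-user /disk/home/ec2-user/.ssh /disk/home/ec2-user/.ssh/authorized_keys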

  • If ownership or permissions are not correct, repair them (in my example, the ownership of .ssh and authorized_keys is incorrect):


  • The commands (again, make sure to understand the concept and adapt according to your specific scenario):
  1. sudo chmod 755 /disk/home
  2. sudo chmod 700 /disk/home/ec2-user
  3. sudo chmod 700 /disk/home/ec2-user/.ssh
  4. sudo chmod 600 /disk/home/ec2-user/.ssh/authorized_keys

  • Finally, don't forget ownership. To find the correct UID and GID numbers associated with the login username, inspect the passwd file with the next command (don't forget to replace ec2-user with your login username):
  1. sudo cat /disk/etc/passwd | grep ^ec2-user:
  • In my example, 500:500 is the UID:GID associated with ec2-user, so I need to run the next command to repair ownership:
  1. sudo chown -R 500:500 /disk/home/ec2-user


To verify the keypair is correct, inspect the authorized_keys file to make sure the public and private keys match. To check it, just run the following commands (don't forget to replace testkey.pem with the filename of your private key and ec2-user with your login username):

  1. chmod 600 testkey.pem
  2. ssh-keygen -y -f testkey.pem
  3. sudo cat /disk/home/ec2-user/.ssh/authorized_keys
If the keypair is correct, you should obtain the same string in steps 2 and 3 above. Example of correct output:


If not, you need to replace the keypair. To do it, follow these steps:
  1. ssh-keygen -y -f testkey.pem | sudo tee /disk/home/ec2-user/.ssh/authorized_keys
  2. sudo chmod 600 /disk/home/ec2-user/.ssh/authorized_keys
  3. sudo chown -R 500:500 /disk/home/ec2-user
In the previous commands make sure (as always) to replace testkey.pem with your keypair file, ec2-user with your login username, and 500:500 with the UID:GID associated with your login username. Example:


Done. Now we can unmount /disk and move the root volume back to the faulty instance:
  1. sudo umount /disk
  • In the AWS web console, EC2 service, Volumes section, detach the root volume of the faulty instance from the work instance. Wait until it becomes available.


  • Attach the root volume of the faulty instance back to the faulty instance. Don't forget to use the device name you copied previously (in my example: /dev/xvda) so the volume is attached as the root volume. Wait until attached.


Finally, in the AWS EC2 web console, Instances section, select the faulty instance, right-click it and select Start. Wait until started. If everything was done correctly, you should now be able to log in using your existing keypair.

Bonus track

If you want existing users to be able to run sudo commands without a password, log in to your instance and add the next line to the /etc/sudoers file, replacing username with the username you want to grant sudo permissions to:

username ALL=(ALL) NOPASSWD: ALL

Last, the following shell script, named keypair.sh, could be useful if you need to create new users with different keypairs, repair ownership/permissions, or reset existing keypairs. Just copy the shell script into your instance and use it. The script is designed to be run by a username with root permissions (or an existing username allowed to run sudo commands as root). Feel free to use it!
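The original script is embedded in the post; as a rough idea of what such a helper looks like, here is a minimal sketch (not the original keypair.sh) that creates a user, if missing, and installs a public key with the expected ownership and permissions:

#!/bin/bash
# Minimal sketch - create a user (if missing) and install a public key for it.
# Usage: sudo ./keypair.sh <username> <public-key-file>
set -e
USERNAME="$1"
PUBKEY_FILE="$2"

# Create the user if it doesn't exist yet
id "$USERNAME" >/dev/null 2>&1 || useradd -m "$USERNAME"

# Locate the user's home directory
HOMEDIR=$(getent passwd "$USERNAME" | cut -d: -f6)

# Install the public key with the expected permissions and ownership
mkdir -p "$HOMEDIR/.ssh"
cat "$PUBKEY_FILE" > "$HOMEDIR/.ssh/authorized_keys"
chmod 700 "$HOMEDIR/.ssh"
chmod 600 "$HOMEDIR/.ssh/authorized_keys"
chown -R "$USERNAME":"$USERNAME" "$HOMEDIR/.ssh"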

NOTE: The previous procedure won't work with Marketplace-based instances. These instances have signed devices, so you won't be able to perform the attach/detach actions. If you need to recover information from faulty Marketplace instances, contact the AWS Support team.

Sunday, July 27, 2014

AWS: Convert T1 instances to T2

AWS recently released the new T2 instance type, which only supports HVM virtualization. What happens if you have a T1 PVM instance and want to move to T2? Because PVM virtualization is not supported on T2, you can't directly change the instance type using the AWS EC2 web console. But you can convert it! To do it, I suggest following this guide.

NOTE: If you are going to use this guide with production instances, it is highly recommended to create an image before proceeding.

The procedure has been tested using Amazon Linux instances, but it should work with any other Linux flavor: Ubuntu, SUSE, Red Hat, etc.

Imagine you have a t1.micro PVM instance with the following characteristics:




First, log in to the instance and make sure it is updated. If not, I recommend updating the instance to the latest stable version using apt-get/yum commands:
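For example, on Amazon Linux (on Ubuntu the equivalent would be sudo apt-get update && sudo apt-get upgrade):
  1. sudo yum -y update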




Wait until the instance is updated and reboot, if needed. Verify everything is correct. I'm also going to create a file named info.txt just so I can tell this is the root volume associated with my T1 PVM instance:




Now, launch a new T2 instance. Don't forget to:
  • Select a compatible T2 (HVM) AMI as close as possible to your current T1 AMI, with the same architecture (32 or 64 bits)
  • Select the same size for the root volume
  • Select the equivalent instance type (if the original T1 instance is t1.micro, select t2.micro)
  • Launch the T2 instance in the same availability zone (AZ) as your current T1 instance
Example:




Stop both instances. Now, go to the AWS EC2 web console, Volumes section, and:
  • Set an easy-to-remember name for each volume
  • Review the attachment information of the T2 HVM root volume (at least) and take note of the device name (in my example: /dev/xvda). We'll need this information later.
  • Detach both root volumes
After these steps, both volumes should be available in AWS console:





Now we need to launch a new instance. We'll use it to perform the required changes, so work instance seems a good name. Don't forget to launch this new instance in the same availability zone as the T1 and T2 instances:




Attach the T1 PVM and T2 HVM volumes as secondary volumes of the work instance:




As you can see, the T1 PVM volume has been attached as the /dev/sdf device and the T2 HVM volume as /dev/sdg. This information is important because it will help us identify the volumes inside the work instance.

Now, log in to the work instance and run the dmesg command to see how the volumes are identified by the kernel:




According to the dmesg information, /dev/sdf (PVM volume) is associated with /dev/xvdf and /dev/sdg (HVM volume) with /dev/xvdg1. Create two mountpoint directories, one for each device, and mount them. Required commands:
  • sudo mkdir /pvm
  • sudo mkdir /hvm
  • sudo mount /dev/xvdf /pvm
  • sudo mount /dev/xvdg1 /hvm

In the next step we're going to back up the current HVM kernel. To ensure maximum compatibility, preserving the current HVM kernel is recommended. You can skip this step if you're 100% sure the PVM kernel will work on the new T2 HVM instance. You will need to back up the /boot directory inside /hvm and the modules associated with the running kernel. To find out which kernel is active, review the /hvm/boot/grub/menu.lst file; it will tell you which modules need to be backed up:




For our example, the following commands are required:
  • sudo cp -prf /hvm/boot /tmp/
  • cat /hvm/boot/grub/menu.lst — the active kernel is tagged 3.10.42-52.145.amzn1.x86_64, so the command to back up the modules will be:
  • sudo cp -prf /hvm/lib/modules/3.10.42-52.145.amzn1.x86_64 /tmp/

The next steps are: remove all files from the HVM volume, copy the files from the PVM volume to the HVM volume, and restore the HVM kernel. The commands:
  • sudo rm -rf /hvm/*
  • sudo cp -prf /pvm/* /hvm/
  • sudo rm -rf /hvm/boot
  • sudo cp -prf /tmp/boot /hvm/
  • sudo cp -prf /tmp/3.10.42-52.145.amzn1.x86_64/ /hvm/lib/modules/




Important: review the root label in the kernel configuration, in the fstab file, and on the root filesystem. For the instance to boot, they need to match. In the previous screenshot, the volume labeled / will be used as the root device by both the kernel configuration and the fstab file. Requesting the HVM volume label, I get the same value. Finally, the HVM volume filesystem is ext4, so nothing additional is required in my example. If you find differences in your environment, you'll need to modify /hvm/boot/grub/menu.lst, /hvm/etc/fstab and/or the HVM volume label to fix them. Otherwise, instance start-up will fail.
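A minimal way to cross-check the three pieces from the work instance (device names follow my example; adapt them to yours):
  1. grep kernel /hvm/boot/grub/menu.lst  (shows the root= label expected by the kernel)
  2. grep -v '^#' /hvm/etc/fstab  (shows the root entry that will be used after boot)
  3. sudo e2label /dev/xvdg1  (shows the label of the HVM volume)
  4. sudo file -s /dev/xvdg1  (shows the filesystem type of the HVM volume)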

If everything is correct, unmount both volumes:
  • sudo umount /pvm
  • sudo umount /hvm
Go to the AWS EC2 web console, Volumes section, and detach both secondary volumes (T1 PVM and T2 HVM) from the work instance.

Now, attach the HVM volume as the T2 instance root volume in the AWS EC2 web console, Volumes section (important: don't forget to use as device name the value you copied previously; in my example, /dev/xvda):




Optional: if the T1 instance has an Elastic IP assigned, I suggest associating this IP with the new T2 instance. To do it, just disassociate the Elastic IP from the T1 instance and associate it with the new T2 instance in the Elastic IPs section. Example:






All done. Now you can start the new T2 instance and log in using the old Elastic IP. As you can see in the next screenshot, the files are the same ones from the T1 PVM root volume, but now running on the new T2 instance type:





If after the procedure you experience issues during T2 instance start-up, select the instance in the AWS EC2 web console, right-click and select Get System Log. If there is any issue related to the kernel, filesystem or volume label, useful troubleshooting information will be displayed there.

When everything works as expected, you can remove the old T1 instance and the work instance. Be sure you won't need them before removing them, because as soon as you delete them you won't be able to access them again.

Sunday, June 29, 2014

AWS: How to auto attach public IP from ElasticIP pool

When an EC2 instance is launched, you can choose whether to attach a public IP. This public IP is randomly selected. Here is an example:


The i-cf27aa8d instance has the public IP address 54.72.151.117 assigned. This IP is taken from the general AWS pool. If you stop and start the instance from the AWS EC2 console, a different public IP address will be selected.

Now, imagine the instance launch is controlled by an auto-scaling policy. Following the behavior explained above, a new random public IP address will be attached each time a new (or replacement) instance is launched to meet the auto-scaling group's needs.

If we need predictable public IP addresses, this default behavior doesn't fit. When an instance is launched manually it is easy to resolve: you can attach an IP address from the Elastic IP pool through the AWS EC2 console, for example. But this can be difficult in an auto-scaling context.

One way to resolve this situation is to use the ipassign script. Let's verify this with an example!

Take the previous instance (i-cf27aa8d). To use the ipassign script we need to:

  • Verify that the AWS CLI and ec2-utils packages are installed. By default, Amazon Linux instances come with them pre-installed. On Ubuntu distributions, you'll probably need to install them manually using apt-get commands:

  • Log in to the instance as the root username. If no instance role is assigned, you need to configure the AWS CLI with an IAM user allowed to execute the describe-addresses, associate-address and disassociate-address EC2 actions.

  • Install the ipassign script. Just follow these instructions:
  1. Download the ipassign script and copy it into the "/etc/init.d" directory of your instance
  2. Modify the script permissions: chmod 755 /etc/init.d/ipassign
  3. Add the script to the instance startup process: chkconfig ipassign on
  • Review the ipassign configuration. At the beginning of the script there are two parameters you need to review and make sure are correctly configured:
  1. REGION: Defines the AWS region. The value must match the region used by the instance. By default it is set to the eu-west-1 (Ireland) region.
  2. IPLOGFILE: Defines the log file. By default it is set to "/var/log/ipassign.log" and my suggestion is to keep this value.
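To give an idea of what the script does at boot, here is a simplified sketch of the core logic using the AWS CLI (this is not the actual ipassign source; it assumes a VPC instance, so it uses allocation IDs):

#!/bin/bash
REGION="eu-west-1"
IPLOGFILE="/var/log/ipassign.log"

# Identify this instance through the metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Look for an Elastic IP in the account that is not associated with any instance
ALLOC_ID=$(aws ec2 describe-addresses --region "$REGION" \
  --query 'Addresses[?AssociationId==`null`].AllocationId | [0]' --output text)

if [ -n "$ALLOC_ID" ] && [ "$ALLOC_ID" != "None" ]; then
  aws ec2 associate-address --region "$REGION" \
    --instance-id "$INSTANCE_ID" --allocation-id "$ALLOC_ID" >> "$IPLOGFILE" 2>&1
else
  echo "$(date) - no free Elastic IP available, leaving instance unchanged" >> "$IPLOGFILE"
fi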

Done! If we restart the instance, during the startup process it will try to attach a free public IP address from the Elastic IP pool. Imagine we have three IP addresses associated with our account, all of them currently in use:


In this context, the instance can't attach any public IP. The script is designed to make no changes if an IP from the Elastic IP pool can't be attached. The next time we log in to the instance, if we review the log file we will see an error message registered:


Just go to AWS EC2 web console and request a new Elastic IP:


Now (with a free public IP in the Elastic IP pool), if we restart the instance again, we'll see that the ipassign script finds a free IP address in the Elastic IP pool and attaches it to the instance:


Logging in to the instance (now using the newly attached public IP) and checking the log file, the following information is displayed:


Finally, in the instance's general information panel, we can see that the instance has a new public IP address (54.76.166.221) from the Elastic IP pool correctly assigned:


By default, 5 Elastic IP addresses can be associated with an AWS account, but this limit can be increased if needed.


Friday, June 6, 2014

AWS: Convert root volume to XFS

By default, the root volume in Amazon Linux instances uses the EXT4 filesystem. But maybe you want to use another one, for example XFS. With the following procedure you'll be able to convert the root volume filesystem of an existing instance to XFS. For our example, we have an instance named MyInstance using the default Amazon Linux distribution:


After logging in, as you can see, the default root filesystem device (/dev/sda1 | /dev/xvda1) is EXT4:


Here are the suggested steps to successfully achieve the filesystem conversion:
  • Login to the instance and become root
  • Install XFS utils: yum install xfsprogs xfsprogs-devel xfsdump
  • Stop the instance
  • Create a snapshot of root volume

  • Create a new volume from the snapshot. Make sure you don't modify the size and that you select the same availability zone where the original root volume of the instance is hosted


  • Start the instance and wait until it becomes available. After that, log in to the instance and become root
  • Attach the new volume as a secondary volume. By default, the /dev/sdf device will be selected. This device is mapped as /dev/xvdf in modern kernels. Run the dmesg command to verify your kernel successfully detects the newly attached volume

  • Install Development Tools: yum groupinstall 'Development Tools'
  • Download Fstransform toolkit from here
  • Uncompress, configure, compile and install the Fstransform toolkit (see the build sketch after this list)

  • Now, run: fstransform /dev/xvdf xfs
  • The previous command will convert /dev/xvdf from the original EXT4 filesystem to XFS. The process takes time, depending on the volume size. Be patient; fstransform provides detailed information about the process, so make sure everything completes correctly.

  • Label /dev/xvdf device as '/'. Just run: xfs_admin -L \/ /dev/xvdf
  • Create a mountpoint directory, for example /xfs, and mount /dev/xvdf on it. Edit the fstab file associated with the new XFS volume (/xfs/etc/fstab) and make sure / is associated with the volume labeled / and that the xfs filesystem type is configured for the root mountpoint

  • Stop the instance
  • Detach original root volume
  • Detach XFS volume
  • Attach the XFS volume as the root volume. Make sure you specify the same device name associated with the original root volume (for Amazon Linux instances it is usually /dev/sda1)
  • Start the instance
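For the "uncompress, configure, compile and install" step referenced above, a typical build-from-source sequence looks like this (a sketch; the exact tarball name depends on the Fstransform version you downloaded, and it assumes the release ships the usual configure script):
  1. tar xzf fstransform-*.tar.gz
  2. cd fstransform-*
  3. ./configure
  4. make
  5. sudo make install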
Now your instance should start. Log in and verify the root volume is now XFS.
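For example, either of these should now report xfs for the root filesystem:
  1. df -hT /
  2. mount | grep ' / '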


If there is any issue during instance startup, review the System Log in the AWS EC2 web console; it will provide useful information for troubleshooting.