Vuex Error With TypeScript

Vuex is a cool library for Vue that makes it easier for a project to manage state. However, when I tried to use it with TypeScript, it threw the error below. I use the championswimmer/vuex-module-decorators library to get annotation support for Vuex.

ERR_ACTION_ACCESS_UNDEFINED: Are you trying to access this.someMutation() or this.someGetter inside an @action?
That works only in dynamic modules.
If not dynamic use this.context.commit("mutationName", payload) and this.context.getters["getterName"]
Error: Could not perform action login
at Store.eval (webpack-internal:///./node_modules/vuex-module-decorators/dist/esm/index.js:311:33)
at step (webpack-internal:///./node_modules/vuex-module-decorators/dist/esm/index.js:96:23)
at Object.eval [as throw] (webpack-internal:///./node_modules/vuex-module-decorators/dist/esm/index.js:77:53)
at rejected (webpack-internal:///./node_modules/vuex-module-decorators/dist/esm/index.js:68:65)
undefined

I did not understand why this happened, because I was not accessing any mutation or getter. I was confused, so I modified my code again and again to see what was wrong. I returned a Promise in the action and rejected it, and it caused the same error. So I searched Google, and it turns out this is a known issue: the library wraps the error, which makes it confusing. All I had to do was add the rawError option to the annotation so it becomes @Action({rawError: true}). Then the real error is displayed normally.
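
Here is a minimal sketch of what the fixed module looks like; the module name, state shape, and the api client are hypothetical, not from my actual project.

import { Module, VuexModule, Mutation, Action } from 'vuex-module-decorators'

// hypothetical backend client, declared only to keep the sketch self-contained
declare const api: { login(p: { user: string; pass: string }): Promise<string> }

@Module
export default class Auth extends VuexModule {
  token = ''

  @Mutation
  setToken(token: string) {
    this.token = token
  }

  // rawError: true stops the library from wrapping rejections,
  // so the real error is reported instead of ERR_ACTION_ACCESS_UNDEFINED
  @Action({ rawError: true })
  async login(payload: { user: string; pass: string }) {
    const token = await api.login(payload)
    // in a non-dynamic module, commit through the context
    this.context.commit('setToken', token)
  }
}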

Useful Git Commands and Cloning Locally

Git is a powerful version control system. It lets you track code and versions, and keeps all the data you need to go back to a specific version.

It is so useful that it is worth using even when working solo: you can see how the project grows. So, this is how to set up a project locally.

Steps

Here are the steps to prepare your own repository.

  1. Change into your project directory with cd /path/to/project/, then run git init to initialize a new Git repository.
  2. Add your project files with git add .
  3. Finally, run git commit -m "Initial" to commit your first changes.
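
Put together, the whole setup is just a few commands (the path is only an example):

cd /path/to/project/
git init
git add .
git commit -m "Initial"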

Those steps cover the basics. As you go further, you will add more files to your project; remember to add and commit them as you go.

  1. Adding a file to Git: git add file
  2. Committing: git commit -m "message"

If you have only changed existing files, you can use git commit -am "message" to make a fast commit. The -a flag stages every file Git already tracks, which saves you the separate git add step (new files still need to be added explicitly).

If you want to make sharing easier, you can initialize a bare repository with git init --bare /path. A bare repository does not contain your working files as usual; you have to clone it and commit back if you want to modify the files.
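
A typical flow with a bare repository looks like this; the paths here are hypothetical:

# create a shared bare repository
git init --bare /srv/git/project.git

# work against it from a normal clone
git clone /srv/git/project.git ~/work/project
cd ~/work/project
# ...edit and commit files, then publish them back:
git push origin master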

Clone

If we want to get code from others, we can use git clone git://... to clone their repository. We can even clone our own repository locally, like git clone /path/to/project/ /path.

You might ask why you would do that. One reason is to keep a deployment copy elsewhere that does not change often.

Update

Once you have your copy of the code, you will want to keep it up to date if you are working with others. The command is git pull.

Simple Deployment

So you are done with your coding. Now you want to deploy your code; you can try this gist, which adds hooks that run on every commit.
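
One common pattern (the gist may differ) is a post-receive hook in the bare repository that checks the latest commit out into the live directory; the deploy path below is hypothetical:

#!/bin/sh
# hooks/post-receive inside the bare repository
# check out the newest master into the web root after every push
GIT_WORK_TREE=/var/www/site git checkout -f master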

Nginx Reverse Proxy: Solving "no live upstreams"

Background

We use a main server to reverse proxy all requests to other backend servers: the client accesses the main server, and nginx proxies the request on to the appropriate backend. We use a subdomain to resolve the backend address dynamically, so everything keeps working when the server IP changes.

Problem

Today, nginx was not working as expected. It sent every request to the 502 page, and we found these entries in the error log.

2019/05/14 08:41:15 [error] 688#688: *4 connect() failed (110: Connection timed out) while connecting to upstream, client: 113.*.*.*, server: furry.top, request: "GET / HTTP/2.0", upstream: "https://116.*.*.*:443/", host: "furry.top", referrer: "https://furry.top/"
2019/05/14 08:41:15 [error] 688#688: *4 no live upstreams while connecting to upstream, client: 113.*.*.*, server: furry.top, request: "GET / HTTP/2.0", upstream: "https://https/", host: "furry.top", referrer: "https://furry.top/"

It was quite weird: when we reloaded the server it started working, then stopped working again after about 10 seconds. We could not figure out why this happened.

Attempts

I assumed it was caused by an unstable connection, so I appended max_fails=0 to the upstream config to keep nginx from ever marking the upstream as failed. However, it got worse: nginx basically kept trying to connect to the backend continuously and stopped replying to any connection. Yet when I curl the backend address directly, it responds successfully. So nothing was wrong with the connection itself, but nginx somehow could not reach the backend.
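
An upstream block with that option looks roughly like this; the names are placeholders:

upstream backend {
    # max_fails=0 disables failure accounting for this server
    server backend.example.com:443 max_fails=0;
}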

I had no idea why this happened, so I checked the connections with netstat -ap. One line drew my attention.

Proto Recv-Q Send-Q Local Address        Foreign Address   State     PID/Program name
tcp        0      1 main-server.a:36298  166.*.*.*:http    SYN_SENT  10128/nginx: worker

Since there was no response from that address, nginx was not connecting to the correct backend address. But why did it only work for a short period of time? Did that mean the DNS server was not working?

So I checked the configuration file again and found resolver 114.114.114.114, which changes the DNS server nginx uses. To verify the address, I used the dig command: dig @114.114.114.114 furry.top. The result showed that 114.114.114.114 was not responding. So I replaced the DNS server with 8.8.8.8 and the problem was solved.
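
The fix in the nginx configuration is a one-line change:

# before: the resolver that kept timing out
resolver 114.114.114.114;

# after: a resolver that actually answers
resolver 8.8.8.8;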

So that is why nginx failed to connect to the backend. I could not find any log indicating that DNS had failed, but only because I had not checked the default log file.

[error] 2439#2439: upstream-dynamic-servers: 'furry.top' could not be resolved (110: Operation timed out)

The answer was there.

Keycloak Docker Setup and Reverse Proxy with Nginx

Keycloak is an open-source identity and access management product from Red Hat. It provides OAuth and SSO support for your applications. It is easy to set up, but you normally need to download dependencies and edit configuration files by hand; with Docker we can set it up much more easily.

From the GitHub repository, we can find information about Keycloak for Docker. One of the simplest setups is docker run -e KEYCLOAK_USER=<USERNAME> -e KEYCLOAK_PASSWORD=<PASSWORD> jboss/keycloak. With that, it is up and running. We can add a database connection by setting the DB_VENDOR environment variable; there is a sample setup for Keycloak with MySQL. It is pretty easy with Docker.
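
For example, a MySQL-backed run might look like this; every value here is a placeholder:

docker run -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  -e DB_VENDOR=mysql -e DB_ADDR=mysql.example.com \
  -e DB_DATABASE=keycloak -e DB_USER=keycloak -e DB_PASSWORD=passwd \
  jboss/keycloak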

However, I do not need multiple MySQL instances running; I prefer a single MySQL service on localhost. So we need to connect to the host's MySQL from within the container, which means locating the host IP address from inside the container. Docker provides host.docker.internal to resolve the host IP address, but it resolves to nothing in Docker on Ubuntu. I found a simple workaround from here.

SETUP

Basically, it sets the network mode to host, so that localhost inside the container refers to the actual machine. I set this in docker-compose with network_mode: host, so the sample compose setup looks like the file below, and the database connection is set up cleanly. The file was generated by JHipster, and I modified it to connect to the database.

version: '2'
services:
  keycloak:
    image: jboss/keycloak:5.0.0
    network_mode: host
    command: ["-b", "0.0.0.0", "-Dkeycloak.migration.action=import", "-Dkeycloak.migration.provider=dir", "-Dkeycloak.migration.dir=/opt/jboss/keycloak/realm-config", "-Dkeycloak.migration.strategy=OVERWRITE_EXISTING", "-Djboss.socket.binding.port-offset=1000"]
    volumes:
      - ./realm-config:/opt/jboss/keycloak/realm-config
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - DB_VENDOR=mariadb
      - DB_DATABASE=keycloak
      - DB_USER=keycloak
      - DB_PASSWORD=passwd
    ports:
      - 9080:9080
      - 9443:9443
      - 10990:10990

It is important to add PROXY_ADDRESS_FORWARDING=true since we want it to work behind an nginx reverse proxy.

We can then run the file with docker-compose -f keycloak.yml up. However, it did not work for me, failing with the message Failed to start service org.wildfly.network.interface.private. I could find nothing about it on the internet, and it works fine on my other computer, so it is probably a problem with Docker on Ubuntu. I ended up using another way. Please comment if you know anything related to this issue.

MYSQL

We have to know the IP address of the host as seen from inside the container. I found this command: ip route show | awk '/default/ {print $3}'. It shows the host IP from inside Docker. We can simply run docker run --rm alpine ip route show | awk '/default/ {print $3}' and see the IP in the terminal. For me, the IP is 172.17.0.1, so I added DB_ADDR=172.17.0.1 to the environment. I also made MySQL listen on 0.0.0.0 so the container is allowed to connect to the service.
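
On the host side, a minimal sketch of the MariaDB change, assuming a Debian-style config path:

# /etc/mysql/mariadb.conf.d/50-server.cnf (path varies by distribution)
[mysqld]
bind-address = 0.0.0.0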

One more thing is the MySQL user. We could add a user with unrestricted remote access, but that is not safe, so I added the user in MySQL with these SQL commands.

CREATE USER 'keycloak'@'172.%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON `keycloak`.* TO 'keycloak'@'172.%';
FLUSH PRIVILEGES;

Or we can simply do GRANT ALL PRIVILEGES ON keycloak.* TO 'keycloak'@'172.%' IDENTIFIED BY 'password';. Both approaches work well with mariadb-10.3.14.
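
To check that the grant works from the container network, you can try a throwaway client container; the image choice and the credentials (matching the compose file above) are assumptions:

docker run --rm mariadb mysql -h 172.17.0.1 -u keycloak -ppasswd -e "SELECT 1" keycloak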

NGINX

We want to proxy it with nginx so that we do not need to convert the certificate for Keycloak itself; this makes it easier to replace the certificate without stopping the Keycloak service. So I obtained a certificate with certbot and enabled it with an nginx configuration like this.

nginx host configuration

server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    location / {
        rewrite ^ https://example.com$request_uri? permanent;
    }
}

server {
    server_name example.com; # managed by Certbot

    access_log /var/log/nginx/ide.mai1015.com-access.log;
    error_log /var/log/nginx/ide.mai1015.com-error.log;

    # SSL configuration

    listen 443 ssl http2;
    # listen [::]:443 ssl http2 ipv6only=on;

    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    location / {
        proxy_pass http://localhost:9080;
        proxy_read_timeout 90;

        # proxy header
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;

        # ws
        proxy_http_version 1.1;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection 'Upgrade';

        # proxy_cache_bypass $http_upgrade;

        #proxy_redirect http://localhost:9080 https://example.com;
    }

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    # include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

The proxy header settings below are important for Keycloak to work correctly; without the Host header, the redirect_uri will break.

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;

With that, we have a fully configured Keycloak working behind an nginx reverse proxy.

Ansible Automation for Server Deploy and Test

When it comes to deploying a server, configuring everything by hand is a pain. You have to type in commands line by line, then edit config files and make sure they work. If you need to do this all the time, you waste a lot of time struggling and waiting for servers.

So I looked for a way to automate this process. I found a lot of programs: Chef, Puppet, Ansible, SaltStack, Terraform, CloudFormation, etc. They are all different, but they serve one purpose: deploying servers for you.

First, I tried Chef. It is cool: clear syntax, fast, and I could always check whether the setup was correct through its rules. However, after some days of experimenting, I gave up. There is so much to it that I had to keep searching and trying things out. If I were managing a lot of servers, I would definitely use it.

Then I found Ansible. It is pretty easy to learn, and the configuration files use YAML. Here is one of the samples from the documentation.

---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum:
      name: httpd
      state: latest
  - name: write the apache config file
    template:
      src: /srv/httpd.j2
      dest: /etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service:
      name: httpd
      state: started
  handlers:
    - name: restart apache
      service:
        name: httpd
        state: restarted

It is easy both to read and to write, and it lets you configure servers over SSH by running Ansible on your own computer.

I have learned a bit of it, so I will go through some examples to show how Ansible works and automates the process.

After following the installation guide, you configure the hosts you want to manage, optionally in groups. We create the file /etc/ansible/hosts to declare individual servers or server groups.

192.0.2.50

[group]
aserver.example.org
bserver.example.org

Or, if you want to move the file somewhere else, you can create ~/.ansible.cfg with the following content, which specifies the location of the hosts file.

[defaults]
inventory = /path/to/hosts

Now we can start with a playbook. A playbook is the configuration file that describes which servers to deploy to and which tasks to run. We can start with an nginx installation.

---

- name: ide source
  hosts: local
  tasks:
  - name: install nginx
    apt:
      name: 'nginx'
      update_cache: yes
      state: latest

This file configures all the servers in the local group, using apt to install the latest version of nginx after updating the package cache. It is pretty clear. We can also install multiple packages at once.

- name: Install common
  become: yes
  apt:
    name: ['git', 'gdb', 'build-essential']
    update_cache: yes
    install_recommends: no

After that, we can make sure that nginx is running.

- name: ensure NGINX is started
  service:
    name: nginx
    state: started

That covers the basic things Ansible can do easily. It also works with loops: I can install a list of Go tools in one task, as shown below. Since go get needs environment variables, I source .profile first.

- name: install go ext
  become_user: "{{ user }}"  # assumes a "user" variable is defined elsewhere
  shell: ". ~/.profile && go get -u {{ item }}"
  with_items:
    - github.com/stamblerre/gocode
    - github.com/uudashr/gopkgs/cmd/gopkgs
    - github.com/ramya-rao-a/go-outline
    - github.com/acroca/go-symbols
    - golang.org/x/tools/cmd/guru
    - golang.org/x/tools/cmd/gorename
    - github.com/rogpeppe/godef
    - github.com/zmb3/gogetdoc
    - github.com/sqs/goreturns
    - golang.org/x/tools/cmd/goimports
    - golang.org/x/lint/golint
    - github.com/alecthomas/gometalinter
    - honnef.co/go/tools/...
    - github.com/golangci/golangci-lint/cmd/golangci-lint
    - github.com/mgechev/revive
    - golang.org/x/tools/cmd/gopls
    - github.com/go-delve/delve/cmd/dlv

These are all useful, and the config stays easy to maintain.

Ansible also provides roles, which make playbooks reusable. With roles, I just need to write:

- name: ide source
  hosts: ideserv
  become: yes
  roles:
    - ide
    - web
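
A role is just a conventional directory layout that Ansible resolves next to the playbook; a minimal sketch:

roles/
  ide/
    tasks/
      main.yml    # tasks that set up the IDE server
  web/
    tasks/
      main.yml    # tasks that set up nginx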

Then I run ansible-playbook coder.yml and it logs in and does all the work for me.