
Wednesday, November 20, 2019

Microservice deployment to Kubernetes

https://github.com/karthequian/wishlist

# Deploying our application to Kubernetes

We're ready to deploy our application to Kubernetes, but let's take a look at our assets.

## Goals:
1. View our sample application and containers
2. Take a look at our deployment file
3. Take a look at our alternate deployment file
4. Deploy our application into Kubernetes and verify that our APIs are working.

## Goal 1
View the sample application and its containers in the repository linked above.

## Goal 2
To view the deployment file, take a look at wishlist-deployment.yaml

## Goal 3
To see another way to run the microservices, take a look at wishlist-deployment-alernate.yaml

## Goal 4
To run the microservices described in Goal 1, from the current directory, run:

`kubectl create -f wishlist-deployment.yaml`

To verify that the deployment is online:
`kubectl get deployments`

To verify that the replica sets are running:
`kubectl get rs`

To verify that the pods are running:
`kubectl get pods`

To see the services:
`kubectl get services`

To interact with your APIs in the minikube environment:
`minikube service wishlist-service`



# Wishlist deployment yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: wishlist-deployment
  labels:
    app: wishlist
spec:
  replicas: 3 #We always want more than 1 replica for HA
  selector:
    matchLabels:
      app: wishlist
  template:
    metadata:
      labels:
        app: wishlist
    spec:
      containers:
      - name: wishlist #1st container
        image: karthequian/wishlist:1.0 #Dockerhub image
        ports:
        - containerPort: 8080 #Exposes the port 8080 of the container
        env:
        - name: PORT #Env variable key passed to container that is read by app
          value: "8080" # Value of the env port.
      - name: catalog #2nd container
        image: karthequian/wishlist-catalog:1.0
        ports:
        - containerPort: 8081
        env:
        - name: PORT
          value: "8081"
      - name: auth #3rd container
        image: karthequian/wishlist-auth:1.0
        ports:
        - containerPort: 8082
        env:
        - name: PORT
          value: "8082"
---
kind: Service
apiVersion: v1
metadata:
  name: wishlist-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: wishlist
  ports:
  - name: wishlist-port
    protocol: TCP
    port: 8080
  - name: wishlist-catalog-port
    protocol: TCP
    port: 8081
  - name: wishlist-auth-port
    protocol: TCP
    port: 8082
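Each container above is told which port to listen on through the PORT environment variable set in the Deployment. As a minimal sketch (illustrative only, not the actual wishlist source), an app could read it like this:

```python
import os

def get_port(default=8080):
    """Read the listening port from the PORT environment variable,
    falling back to a default when it is not set."""
    return int(os.environ.get("PORT", default))
```

Inside the wishlist container the Deployment sets PORT="8080", so `get_port()` would return 8080 there.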



Sunday, November 17, 2019

Large Enterprise Application Experience

1. My Early Enterprise Experience
- Team built one big Java WAR file
- Ops team deployed to the dev WebLogic Server
- Local development on WebLogic Servers running on our own machines
- Environment set up to use the development database via data sources


2. Ecommerce Catalog
- View the list of current products
- Understand their specifications
- Save into a wish list that enables team collaboration
- Get a quote for the wish list or go through the purchase process.

Three Modules
1) User Module
Responsible for user management, login, and profile management

2) Catalog Module
List of products in the company catalog

3) Wish List Module
API that the customers will use to create and view their wish lists

Our Monolith
- A single WAR file, with one or more JAR files that provide all of the core functionality
- Will likely have a common web tier to handle web requests and responses

Breaking This Up
Break the single application into three microservices.
- Auth
- Wish list functionality
- The catalog portion


Benefits of this Structure
- Each microservice has its own REST APIs that can be used by the other services

- Reorganize codebase in a more logical way

- Development team can split into smaller feature teams to work on the individual microservices.

- Code and API are more visible to business team

- Individual teams can develop, build, and test their applications locally, so the team is more productive up front

- Enables a continuous integration pipeline

Containers allow you to use Kubernetes.

Transitioning to Microservices
- Addressed some of the microservice building block concerns
- Applied some of the basic principles from the twelve-factor pattern

Our microservices are running in Docker containers, so let's Kube this up.
















Saturday, November 16, 2019

Microservice Patterns in Kubernetes

1. Goal
1.1. Build reference architecture for microservices for Kubernetes

2. Architecture Groupings

2.1. Basic building block architectures
2.1.1. Deployment Patterns
2.1.2. Runtime patterns


3. Building Blocks

3.1. Covers the initial concerns
3.1.1. Codebase
3.1.1.1. Code stored in source control
- Example: Git, Perforce
3.1.1.2. Container images stored in an image repository
- Example: Docker Hub, Artifactory

3.1.1.3. Codebase Workflow
3.1.1.3.1. Push code to your source control.
3.1.1.3.2. Automated build is kicked off to build and run tests against code.
3.1.1.3.3. Build container image; push to image repository.

3.1.1.4. Image Repository
3.1.1.4.1. Stores your code in an image
3.1.1.4.2. Need to decide early on
3.1.1.4.3. Follow your company guidelines

3.1.2. Dependencies

3.1.2.1. Applications modeled in Kubernetes as Deployments and pods
3.1.2.2. Single pod can have many containers inside
3.1.2.3. Commonly seen in Kubernetes: sidecar pattern

3.1.3. Dev/staging/production parity

3.1.3.1. Dev versus Prod in Kubernetes
3.1.3.1.1. Common Kubernetes deployment patterns
- Small footprint: different namespaces with different credentials for dev, staging, and production

- Large footprint: unique Kubernetes installation for dev, staging, and production


3.1.4. Admin processes.

3.1.4.1. Admin process containers tagged in a similar way to the running application
3.1.4.2. Containers run as Kubernetes Jobs/CronJobs
3.1.4.3. Also run as a separate deployment


4. Deployment Patterns


4.1. Patterns around application deployment
4.1.1. Application configurations
4.1.1.1. Applications always have associated configuration to make the application work as expected

4.1.1.2. Application Configuration in Kubernetes
4.1.1.2.1. Two ways to store configs
- ConfigMaps: for generic information (example: metadata, version)
- Secrets: for sensitive data (example: passwords)

4.1.1.2.2. Loaded into the pod via
- Environment variables
- Files
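A sketch of both loading mechanisms (the variable name and mount path are hypothetical examples, not from the course):

```python
import os

def load_config(name, mount_dir="/etc/config"):
    """Prefer an environment variable; otherwise fall back to a file
    mounted from a ConfigMap or Secret (Kubernetes mounts one file
    per key under the mount directory)."""
    if name in os.environ:
        return os.environ[name]
    path = os.path.join(mount_dir, name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return None
```

The same app code works whether the config was injected as an env variable or mounted as a file.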


4.1.1.2.3. A Word on Secrets
- Secrets are a good start, but depending on the use case, you might need something more robust like HashiCorp Vault

4.1.2. Build, release, run
- Tag containers at build time with explicit version
- Don't use the latest tag for production containers

4.1.2.1. Running in Kubernetes
- High-level constructs to run containers
- Deployments, DaemonSets, ReplicaSets
- Package management provided by Helm
- Adds revision control


4.1.3. Processes
4.1.3.1. Processes and Port Bindings
4.1.3.1.1. Processes
- Keep application stateless
- Goal: Allow requests to go to any container or server by default

4.1.3.1.2. Statelessness in Kubernetes
- Translated to Deployments and pods
- Deployments are composed of ReplicaSets, which are collections of one or more pods

4.1.3.1.3. Word on StatefulSets
- Typically used to create persistent storage systems like a MySQL shard

4.1.3.2. Port Bindings
- Containers are implemented in pods
- Communicate to each other via well-defined ports



5. Runtime Patterns
Associating Resources in Kubernetes
- Everything is treated as a service, and configurations are stored in a ConfigMap or Secret

Scenario: Replace a Link to a Database

1) Create new database; ensure it's online and ready to accept connections.
2) Update configuration stored in the ConfigMaps or Secrets.
3) Kill the pods that were communicating with the old database.

When Kubernetes starts up the new pods, the new pods automatically pick up the new configuration, and you'll be using the new service.

If a pod is taking too much load or is receiving a lot of requests, it's easy to scale the number of pods in Kubernetes by adding more to the replica set.

Scaling Kubernetes to handle more traffic is one of the strengths of the platform.



5.1. Patterns for live running systems
5.1.1. Backing services
5.1.2. Features around concurrency
5.1.3. Disposability
The ability to maximize robustness with fast startup and graceful shutdown

Containers
- Start up fast and run efficiently
- Have all required tooling built inside of the container image

Containers to Pods to Replicas
- Kubernetes manages your containers in pods, which are managed with ReplicaSets
- If one of the pods goes offline, the ReplicaSet will automatically start up a new pod to take its place

Advantages to Users
- From a user perspective, the application will still function, and the user won't see any downtime

5.1.4. Log management

5.1.4.1. Logging
- Treat logs as streams: execution environment handles the logs
- Common to use a log router (Beats/Fluentd) to save the logs to a service (Elasticsearch/Splunk)
- Kubernetes makes this process easy


6. Some Assembly Required

6.1. Basic knowledge of Kubernetes required
6.2. Watch the Learning Kubernetes course in the library if needed











Sunday, September 8, 2019

Recurrent Neural Networks

-> Google Translate relies heavily on recurrent neural networks
-> we can use recurrent neural networks for time series analysis (e.g., stock analysis)

Turing test: a computer passes the Turing test if a human is unable to distinguish the computer from a human in a blind test

~ recurrent neural networks are able to pass this test: a well-trained recurrent network is able to "understand" English, for example

LEARN LANGUAGE MODELS!!

We would like to make sure that the network is able to learn connections in the data even when they are far away from each other

"I am from Hungary. Lorem ipsum dolor sit amet, consetetur adipiscing elit, sed do eiusmod tempor incididunt..."

Recurrent neural networks are able to deal with relationships far away from each other

~ it is able to guess the last word: Hungarian


Combining convolutional neural networks with recurrent neural networks is quite powerful

~ we can generate image descriptions with this hybrid approach


With multilayer neural networks (or deep networks) we make predictions independently of each other
p(t) is not correlated with p(t-1) or p(t-2)...

-> training examples are independent of each other
               Tigers, elephants, cats ..  nothing to do with each other

THESE PREDICTIONS ARE INDEPENDENT !!!!

With Recurrent Neural Networks we can predict the next word in a given sentence:
     it is important in natural language processing ~or we want to predict the stock prices tomorrow

p(t) depends on p(t-1), p(t-2)....

TRAINING EXAMPLES ARE CORRELATED!!!


x: input
h: activation after applying the activation function on the output

How to train a recurrent neural network?
~ we can unroll it in time in order to end up with a standard feedforward neural network: we know how to deal with it


As you can see, several parameters are shared across every single layer!!!

for a feed-forward network these weights are different
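Unrolling with shared weights can be sketched as a plain loop; this toy scalar RNN (made up for illustration) applies the same w_x and w_h at every time step:

```python
import math

def rnn_forward(xs, w_x=0.5, w_h=0.8, h0=0.0):
    """Toy scalar RNN: the SAME weights w_x and w_h are applied at
    every time step -- unlike a feed-forward net, where each layer
    has its own weights."""
    h = h0
    hs = []
    for x in xs:                          # unrolled in time
        h = math.tanh(w_x * x + w_h * h)  # shared parameters
        hs.append(h)
    return hs
```

Each hidden state depends on all previous inputs, which is exactly what lets the network carry context forward.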

Vanishing/exploding gradients problem
When dealing with backpropagation we have to calculate the gradients

~ we just have to apply the chain rule several times
We multiply the weights several times: if you multiply x < 1 several times the result will get smaller and smaller

VANISHING GRADIENT PROBLEM

Backpropagation Through Time(BPTT): the same as backpropagation but these gradients/error signals will also flow backward from future time-steps to current time-steps
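In symbols, applying the chain rule backward through time yields a product of per-step Jacobians (standard notation, not taken from these notes):

```latex
\frac{\partial h_t}{\partial h_k}
  = \prod_{i=k+1}^{t} \frac{\partial h_i}{\partial h_{i-1}}
  = \prod_{i=k+1}^{t} W^{\top}\,\mathrm{diag}\!\left(f'(z_i)\right)
```

When the factors have magnitude below 1, the product shrinks exponentially (vanishing gradients); above 1, it grows exponentially (exploding gradients).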

We multiply the weights several times: if you multiply x > 1 several times the result will get bigger and bigger

It is usually a problem when dealing with recurrent neural networks
~ because these networks are usually deep!!!

-> why is the vanishing gradient a problem?
Because gradients become too small: difficult to model long-range dependencies
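The effect is easy to see numerically (toy numbers, purely illustrative):

```python
# Backpropagation multiplies one weight factor per time step.
# A factor |w| < 1 shrinks the gradient exponentially (vanishing);
# a factor |w| > 1 blows it up exponentially (exploding).
def repeated_product(w, steps):
    g = 1.0
    for _ in range(steps):
        g *= w
    return g

vanishing = repeated_product(0.9, 100)   # on the order of 1e-5
exploding = repeated_product(1.1, 100)   # on the order of 1e4
```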

-> for recurrent neural networks, local optima are a much more significant problem than with feed-forward neural networks
~ error function surface is quite complex

These complex surfaces have several local optima and we want to find the global one: we can use meta-heuristic approaches as well

EXPLODING GRADIENTS PROBLEM
-> truncated BPTT algorithm: we use simple backpropagation, but
we only do backpropagation through k time-steps

-> adjust the learning rate with RMSProp (an adaptive algorithm)
We normalize the gradients: using a moving average over the root mean squared gradients
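A single-parameter sketch of the RMSProp update (hypothetical constants, not a full optimizer):

```python
import math

def rmsprop_step(param, grad, avg_sq, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSProp update for a single parameter: keep a moving
    average of squared gradients and divide the raw gradient by its
    root mean square, so the step size stays controlled."""
    avg_sq = decay * avg_sq + (1 - decay) * grad ** 2
    param = param - lr * grad / (math.sqrt(avg_sq) + eps)
    return param, avg_sq
```

Even a huge gradient yields a bounded first step of roughly lr / sqrt(1 - decay), which is what tames exploding gradients.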

VANISHING GRADIENTS PROBLEM
-> initialize the weights properly (Xavier initialization)
-> use proper activation functions such as the ReLU function
-> use other architectures: LSTMs or GRUs






















Tuesday, July 16, 2019

(1) Download the Korean Wikipedia dump file

https://dumps.wikimedia.org/kowiki/latest/

(2) Download WikiExtractor

Once the dump file has been downloaded, we will use 'WikiExtractor', an open-source tool that converts Wikipedia dump files into text format.
To download 'WikiExtractor', run the git clone command below in the Windows command prompt or in a terminal on Mac and Linux.
git clone "https://github.com/attardi/wikiextractor.git"  

(3) Convert the Korean Wikipedia dump file

Place WikiExtractor and the Korean Wikipedia dump file in the same directory and run the command below; the dump file will be converted into text files. It differs from machine to machine, but it usually takes around 10 minutes.
python WikiExtractor.py kowiki-latest-pages-articles.xml.bz2  



(4) Build the training data


First, let's merge all of the files wiki00 ~ wiki90 inside the AA directory into wikiAA.txt.

[root@centos7-66 text]# cat AB/wiki* > ./wikiAB.txt
[root@centos7-66 text]# cat AC/wiki* > ./wikiAC.txt
[root@centos7-66 text]# cat AD/wiki* > ./wikiAD.txt
[root@centos7-66 text]# cat AE/wiki* > ./wikiAE.txt
[root@centos7-66 text]# cat AF/wiki* > ./wikiAF.txt
[root@centos7-66 text]# cat AG/wiki* > ./wikiAG.txt


[root@centos7-66 text]# cat ./wikiA* > ./wiki_data.txt
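If a shell isn't available, the same merging can be sketched in Python (the directory layout is assumed to match the WikiExtractor output above):

```python
import glob
import os

def merge_wiki_files(src_dir, out_path):
    """Concatenate all wiki* files in src_dir into a single text
    file, mirroring `cat AB/wiki* > wikiAB.txt`."""
    with open(out_path, "w", encoding="utf-8") as out:
        for path in sorted(glob.glob(os.path.join(src_dir, "wiki*"))):
            with open(path, encoding="utf-8") as f:
                out.write(f.read())
```

Calling it per directory (AA, AB, ...) and then merging the per-directory outputs reproduces wiki_data.txt.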









Monday, July 15, 2019

mac webview debug

1. First, on the iPhone, go to Settings -> Safari -> Advanced and turn on Web Inspector.

2. Connect the iPhone to the Mac and launch Safari.

3. If the iPhone's device name appears under Safari's Develop menu, the setting from step 1 was applied correctly.
(If the Develop menu is not visible in the menu bar, choose Safari > Preferences, click Advanced, then select 'Show Develop menu in menu bar'.)


4. Launch the app containing the web page you want to debug, and the app name - web page will appear under the Develop menu from step 3.

5. Click the web page, and a developer tools window will appear.

6. Use Elements, Network, Resources, Console, Storage, and so on to debug the web page.