Bridge interface
Docker container communication
Besides communicating between containers by IP, you can also communicate using container names. Starting with Docker 1.10, Docker embeds a DNS server, but its name resolution only works on user-defined (custom) networks; the default bridge network has no name resolution. Use the --name parameter when starting a container to give it a resolvable name.
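A minimal sketch of the difference, assuming Docker 1.10 or later is installed (the network name my_net1 appears later in these notes; the container name web1 is illustrative):

```shell
# User-defined networks get embedded-DNS name resolution
docker network create my_net1
docker run -d --name web1 --network my_net1 nginx

# On the same custom network, the container name resolves
docker run --rm --network my_net1 busybox nslookup web1

# On the default bridge, the same lookup fails
docker run --rm busybox nslookup web1
```

Inside containers attached to a custom network, /etc/resolv.conf points at Docker's embedded DNS server on 127.0.0.11.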
Container communication falls into two cases: containers interworking on a single node, and containers interworking across nodes.
On a single node, containers are bridged locally. When containers on the same network segment communicate, a packet leaving a container reaches the gateway docker0; since the destination is local and on the same segment, the traffic never leaves docker0.
Containers on different network segments are isolated from each other. To let them interwork, attach an additional network interface to the container so that it joins the other network.
Container IPs are dynamic and therefore unreliable for addressing; the usual practice is to use Docker's embedded DNS server for name resolution instead. When a container is started with a --name parameter, a matching resolution record is created automatically, and when the container's IP changes, the DNS record is updated along with it.
On server1:
docker run -d --name vm1 --network my_net1 nginx   # -d runs in the background
docker run -it --name vm2 --network my_net1 ubuntu
ping vm1
The joined container mode is a special network mode. Specify it when creating a container with --network=container:vm1, where vm1 is the name of an already running container.
docker run -it --name vm1 --network my_net1 ubuntu
ip addr
docker ps -a
docker container prune     # reclaim stopped containers
docker run -d --name demo nginx    # -d runs in the background
docker ps                  # show currently running containers
brctl show                 # view the bridge; the default bridge has no name resolution
To get resolution records, use a custom network that you created yourself: the default bridge driver provides bridging but no name resolution.
docker network ls
docker inspect demo        # shows the container's (or image's) metadata, including its IP address
docker run --rm -it --network container:demo busybox ip addr
The busybox container uses the same IP as the previously started demo container, so the ports the two containers listen on must not overlap.
--link can be used to connect two containers. Its format is --link <name or id>:alias, where name or id identifies the source container and alias is the alias the new container uses for it. This resolution does not go through DNS; it is written into the hosts file.
On server1:
cat /etc/hosts
docker run --rm -it --link demo:webserver busybox
cat /etc/hosts
ping demo
env
When demo's IP changes, the hosts-file resolution changes too, but the environment variables do not. Open another terminal:
docker stop demo            # stop the previous demo
docker run -d --name demo2 nginx
docker inspect demo2        # demo2 has taken over demo's previous IP address
docker start demo
docker inspect demo         # demo has been reassigned a different address
Back in the previous busybox container:
cat /etc/hosts
env
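The mechanism behind --link can be illustrated without Docker: resolving the alias is just a lookup in a hosts-style file, not a DNS query. A minimal sketch (the address 172.17.0.2 is a made-up example of a container IP):

```shell
# --link demo:webserver writes a line like this into the new container's /etc/hosts
hosts_file=$(mktemp)
printf '172.17.0.2\twebserver demo\n' > "$hosts_file"   # hypothetical container IP

# "Resolving" the alias is a plain file lookup, not a DNS query
awk '$2 == "webserver" {print $1}' "$hosts_file"        # prints 172.17.0.2

rm -f "$hosts_file"
```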
Outbound container traffic depends entirely on NAT on the host, namely SNAT; one SNAT rule is created for every virtual network that exists.
Inbound traffic uses DNAT.
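The rules Docker installs look roughly like the following sketch. The subnet 172.17.0.0/16 and container address 172.17.0.2 are the usual defaults, not values from these notes; check iptables -t nat -nL on your own host:

```shell
# SNAT (outbound): traffic leaving the bridge subnet is masqueraded to the host IP
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

# DNAT (inbound): traffic arriving at host port 80 is redirected to the container
iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80
```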
On server1:
ps ax
netstat -antlp
docker ps
docker rm -f demo
docker rm -f demo2
docker ps -a
docker run -d --name demo -p 80:80 nginx   # create demo with a port mapping via -p
iptables -t nat -nL
Here you can see a DNAT rule: when host port 80 is accessed from the external network, the traffic is redirected to port 80 of the container. This is the port-redirection mechanism, implemented with dual redundancy: as long as one of the two paths is available, the network stays reachable.
netstat -antlp
Once port mapping is enabled, a docker-proxy process is started.
docker ps
Demonstrating the dual-redundancy mechanism (the NAT translation still depends on iptables):
On server1:
iptables -t nat -D DOCKER 4   # delete the DNAT rule from the DOCKER chain
iptables -t nat -nL
From another host, curl still succeeds (handled by docker-proxy).
netstat -antlp                # find the PID of the docker-proxy process
kill -9 5276
From another host, curl 172.25.0.1 now fails.
Restore the setup:
docker stop demo
docker start demo
iptables -t nat -nL
netstat -antlp
Conversely, if you kill docker-proxy first, traffic still gets through via the iptables DNAT rule.
Rules for outbound packets: SNAT.

Rules for inbound packets: a dual-redundancy mechanism.
Access from the Internet to a container uses docker-proxy and iptables DNAT.
The host accessing a local container uses iptables DNAT.
An external host accessing containers, or access between containers, is handled by docker-proxy.

Local container-to-container traffic is forwarded by the bridge.
brctl show
route -n
On server1:
docker ps
docker inspect demo
docker rm -f demo

On server2:
docker ps
ip addr
docker run -d --name demo nginx
docker inspect demo     # the container IP on this host also starts from .2
docker rm -f demo
B. Cross-host container networking
Choose the mainstream technology stack.
Cross-host network solutions:
Docker native: overlay and macvlan
Third party: flannel, weave, calico
How are all these network solutions integrated with Docker?
libnetwork: the Docker container network library.
CNM (Container Network Model): the model that abstracts the container network.
CNM has three kinds of components (most relevant to developers, but worth knowing for operations staff):
sandbox: the container's network stack, including its interfaces, DNS, and routing table
endpoint: connects a sandbox to a network (a veth pair)
network: contains a group of endpoints; endpoints on the same network can communicate
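A sketch of how the three CNM components show up in practice, assuming Docker is installed; the names demo_net and app are illustrative:

```shell
docker network create demo_net                     # a CNM "network"
docker run -d --name app --network demo_net nginx

# The endpoint: this container's attachment to demo_net (veth pair, IP, gateway)
docker inspect -f '{{json .NetworkSettings.Networks}}' app

# The sandbox: the container's own network namespace (interfaces, routes, DNS)
docker exec app cat /etc/resolv.conf
```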

macvlan network solution implementation
macvlan is a NIC virtualization technology provided by the Linux kernel. It requires no Linux bridge and uses the physical interface directly, so its performance is excellent.
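Outside Docker, the same kernel feature can be created directly with iproute2, which shows why no bridge is involved. A sketch that requires root and an existing eth0; the device name macvlan0 and the address are examples:

```shell
# Create a macvlan sub-device directly on the physical NIC
ip link add macvlan0 link eth0 type macvlan mode bridge
ip addr add 172.25.0.100/24 dev macvlan0   # example address
ip link set macvlan0 up

ip -d link show macvlan0   # shows "macvlan mode bridge"; brctl show lists no new bridge

# Clean up
ip link del macvlan0
```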
Add a network card to each of the two Docker hosts and enable promiscuous mode on it.
On server1:
ip addr
docker network ls
docker network prune    # delete unused networks
docker network ls
On server2:
ip addr
docker network ls
docker network prune    # delete unused networks
docker network ls
On both server1 and server2:
ip link set eth0 promisc on    # enable promiscuous mode
ip addr
The following conditions are required: the NIC must be up, and promiscuous mode must be enabled.
docker network create -d macvlan --subnet <subnet> --gateway <gateway> -o parent=eth0 mynet1
# -d specifies the driver type; --subnet the subnet; -o parent= the physical interface; mynet1 is the network name
docker network ls
docker network inspect mynet1
docker run -it --name demo1 --network mynet1 --ip 172.25.0.10 busybox
# with macvlan the container IP is assigned by hand; Docker does not allocate it
ip addr
docker ps
ip addr
brctl show    # no new bridge appears
docker attach demo1
Enabling interworking between the two containers on different hosts:
On server2:
docker network create -d macvlan --subnet <subnet> --gateway <gateway> -o parent=eth0 mynet1   # specify the same parent interface
docker network inspect mynet1
docker pull busybox
docker run --rm -it --network mynet1 --ip <ip> busybox
ip addr
ping <IP of demo1 on server1>
On server1:
docker attach demo1
ping <IP of the busybox container on server2>
macvlan network structure analysis
No new Linux bridge is created. The container's interface is connected directly to the host NIC, so no NAT or port mapping is needed.
macvlan monopolizes the host NIC, but VLAN sub-interfaces can be used to implement multiple macvlan networks on one NIC.
VLANs divide a physical layer-2 network into up to 4096 logical networks that are isolated from each other.
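The 4096 figure comes from the 802.1Q header: the VLAN ID field is 12 bits wide.

```shell
# 12-bit VLAN ID field => 2^12 possible IDs
echo $((1 << 12))    # prints 4096 (IDs 0 and 4095 are reserved, leaving 4094 usable)
```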
If more networks are needed, another network card can be added.
On server1 (a second network card has been added):
docker ps
docker network ls
ip addr
ip link set up dev eth1     # activate the NIC device
ip addr
ip link set eth1 promisc on
ip addr
docker network create -d macvlan --subnet 172.30.0.0/24 --gateway <gateway> -o parent=eth1 mynet2
docker network inspect mynet2
docker run --rm -it --network mynet2 busybox ip addr
docker run --rm -it --network mynet2 --ip <ip> busybox ip addr
Because this approach needs a physical NIC per macvlan network, it does not scale: the hardware limits it. Use VLAN sub-interfaces instead.
On server1:
docker ps -a
docker network create -d macvlan --subnet <subnet> --gateway 172.40.0.1 -o parent=eth1.1 mynet3
docker network create -d macvlan --subnet 172.50.0.0/24 --gateway <gateway> -o parent=eth1.2 mynet4
docker network ls
docker network inspect mynet3
docker network inspect mynet4
ip addr
Different virtual networks are created through the sub-interfaces. With this method an administrator can assign addresses directly.
macvlan isolation and connectivity between networks
macvlan networks are isolated at layer 2, so containers on different macvlan networks cannot communicate. If you want them to communicate, add another network interface to the container.
A gateway can also be used to connect macvlan networks.
Docker imposes no restrictions here; manage these networks just as you would VLANs.
docker network connect
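docker network connect attaches an extra interface to a running container. A sketch using the networks from this section, assuming demo1 is still running on mynet1:

```shell
# Give demo1 a second interface on mynet2 so it can reach both macvlan networks
docker network connect mynet2 demo1

docker exec demo1 ip addr   # busybox includes ip; two macvlan interfaces now appear

# Detach again if needed
docker network disconnect mynet2 demo1
```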
The overlay driver depends less on the underlying network and suits scenarios that need more subnets.

©2019-2020 Toolsou All rights reserved.