- 2020-08-01 08:23

<>Joint Unsupervised Learning of Deep Representations and Image Clusters

<>Abstract

The paper proposes JULE, a framework for jointly learning deep representations and image clusters.

In this framework, successive operations in a clustering algorithm are expressed as steps in a recurrent process, stacked on top of the representations produced by a Convolutional Neural Network (CNN). (Core sentence from the paper: "In our framework, successive operations in a clustering algorithm are expressed as steps in a recurrent process, stacked on top of representations output by a Convolutional Neural Network (CNN).")

This becomes clear once you finish reading the paper.

<>Introduction

Given $n_s$ images $\boldsymbol{I} = \{I_1, \dots, I_{n_s}\}$, the global optimization objective is:

$$\underset{\boldsymbol{y}, \boldsymbol{\theta}}{\operatorname{argmin}} \ \mathcal{L}(\boldsymbol{y}, \boldsymbol{\theta} \mid \boldsymbol{I}) \tag{1}$$

where:

* $\mathcal{L}$ is the loss function

* $\boldsymbol{y}$ holds the cluster ids of all images (author's note: since the setting is unsupervised, why are there cluster ids? If these were merely image ids, $\boldsymbol{I}$ above already covers that)

* $\boldsymbol{\theta}$ are the trainable parameters

The optimization can be split into the following two alternating steps:

$$\underset{\boldsymbol{y}}{\operatorname{argmin}} \ \mathcal{L}(\boldsymbol{y} \mid \boldsymbol{I}, \boldsymbol{\theta}) \tag{2a}$$

$$\underset{\boldsymbol{\theta}}{\operatorname{argmin}} \ \mathcal{L}(\boldsymbol{\theta} \mid \boldsymbol{I}, \boldsymbol{y}) \tag{2b}$$

Naturally, Eq. (2a) is a plain clustering problem, while Eq. (2b) is a supervised representation-learning problem.

Therefore, the paper alternates between the two: representation learning is used to refine the cluster ids, and the cluster ids are in turn used to optimize the parameters. (This feels much like self-supervised learning.)
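A minimal numpy sketch of this alternation, where two centroids act as a toy stand-in for the CNN parameters $\boldsymbol{\theta}$ and nearest-centroid assignment stands in for the clustering step (the k-means-style update and all variable names are my illustrative assumptions, not the paper's HAC/CNN pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "features": two well-separated blobs standing in for CNN outputs.
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])

# theta: here just two centroids, a toy stand-in for the CNN weights,
# initialized deterministically with one point from each blob.
theta = np.array([X[0], X[20]])

for _ in range(10):
    # Eq. (2a): fix theta, choose the cluster ids y that minimize the loss.
    dists = np.linalg.norm(X[:, None, :] - theta[None, :, :], axis=-1)
    y = dists.argmin(axis=1)
    # Eq. (2b): fix y, update theta, like a supervised step on pseudo-labels.
    theta = np.array([X[y == k].mean(axis=0) for k in range(2)])
```

Each pass performs one (2a)/(2b) round; in JULE proper, (2a) is an agglomerative merge and (2b) is CNN training on the current pseudo-labels.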

Reasons for clustering with HAC (hierarchical agglomerative clustering):

* It starts from over-clustering (i.e., every sample is its own cluster), which suits the early stage when the representation is still poor, before the CNN has been trained well. Note that the authors train the CNN from scratch rather than using a pretrained one.

* As the representation improves, clusters can be merged in the subsequent clustering steps.

* HAC is itself an iterative process, so it fits naturally into a recurrent framework.
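The merge schedule described above can be sketched as a naive agglomerative pass (a minimal sketch using centroid linkage; `hac` and its interface are my own illustrative names, not the paper's code):

```python
import numpy as np

def hac(X, n_clusters):
    """Naive agglomerative clustering with centroid linkage.

    Starts from over-clustering (every sample is its own cluster) and
    greedily merges the two clusters with the closest centroids until
    only n_clusters remain.
    """
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(X[clusters[a]].mean(0) - X[clusters[b]].mean(0))
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] = clusters[a] + clusters.pop(b)
    return clusters

# Two tight pairs of points collapse into two clusters.
groups = hac(np.array([[0.0, 0], [0.1, 0], [5, 5], [5.1, 5]]), 2)
```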

That is the basic process: a simple but effective end-to-end learning framework.

The key points are:

* end-to-end

* unlabeled data

The figure shows the specific process: in round $t$, a red image and a yellow image are merged, then backpropagation updates the CNN; at the next step, two green images and one pink image are merged, again followed by backpropagation to optimize the CNN. This process iterates until it is done.

The workflow is easy to understand.
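A toy sketch of that round-by-round workflow, where pulling each sample toward its cluster mean stands in for the backprop update of the CNN (the data and the mean-pull update are my illustrative assumptions, not the paper's training rule):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "features": two tight, well-separated groups standing in for CNN outputs.
F = np.vstack([rng.normal(0.0, 0.1, (4, 2)),
               rng.normal(3.0, 0.1, (4, 2))])
clusters = [[i] for i in range(len(F))]  # over-clustering: one sample per cluster

while len(clusters) > 2:
    # Merge step: fuse the two clusters whose centroids are closest.
    cents = [F[c].mean(0) for c in clusters]
    a, b = min(((i, j) for i in range(len(clusters))
                for j in range(i + 1, len(clusters))),
               key=lambda p: np.linalg.norm(cents[p[0]] - cents[p[1]]))
    clusters[a] = clusters[a] + clusters.pop(b)
    # "Backprop" step (stand-in): pull each sample toward its cluster mean,
    # mimicking how the CNN update tightens clusters in feature space.
    for c in clusters:
        F[c] = F[c] + 0.5 * (F[c].mean(0) - F[c])
```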

