push the spark_noetic

This commit is contained in:
litian.zhuang 2021-03-04 17:23:09 +08:00
commit 3fed156a87
1235 changed files with 456957 additions and 0 deletions

.gitignore vendored Executable file

@ -0,0 +1,26 @@
*~
*.swp
*.user
*.autosave
CMakeLists.txt.user
qtcreator-*
*.user
*.bat
*.d
*.g
*.ldata
*.log
intel
.catkin_workspace
build-simple_goals-Desktop-Default/
build-*/
build
devel
install
*.pyc
*.orig
.tmp
/spark_tutorials/spark_iflytek/msc/res/asr/GrmBuilld/
tensorflow

README.md Executable file

@ -0,0 +1,98 @@
# NXROBO Spark
<img src="http://wiki.ros.org/Robots/Spark?action=AttachFile&do=get&target=spark1.png" width="300">
## 说明 Description
- This is a tutorial for beginners; the detailed version is [here](https://github.com/NXROBO/spark/blob/master/README_Detailed.md).
- 本说明为初学者体验版,[这里](https://github.com/NXROBO/spark_noetic/blob/master/README_Detailed.md)有详细说明的版本。
## 列表 Table of Contents
* [功能包说明packages-overview](#功能包说明packages-overview)
* [使用usage](#使用usage)
* [视频展示Video](#视频展示Video)
## 功能包说明packages-overview
* ***src*** : the Spark source code, including base configuration, hardware drivers, and the application packages.
* ***doc*** : software and hardware dependency packages.
## 使用usage
### 系统要求 Requirements
* System: Ubuntu 20.04+
* ROS Version: Noetic (Desktop-Full Install)
### 下载安装 Download and install
* 下载工作空间 Download the workspace:
```bash
git clone https://github.com/NXROBO/spark_noetic.git
```
* 安装依赖库 Install libraries and dependencies (enter the workspace first):
```bash
cd spark_noetic
```
### 编译运行 Compile and run
```bash
catkin_make
```
* 如果编译一切正常,可根据提示运行相关例程。If compilation succeeds, run the examples as prompted:
```bash
./onekey.sh
```
## 视频展示Video
1.Spark跟随 Spark-Follower
<a href="https://www.youtube.com/embed/UrD2AEQ3VkI" target="_blank"><img src="http://img.youtube.com/vi/UrD2AEQ3VkI/0.jpg"
alt="follow-person" width="240" height="180" border="10" /></a>
```bash
cd spark_noetic
source devel/setup.bash
roslaunch spark_follower bringup.launch
```
2.Spark建图 Spark-SLAM-Mapping
<a href="https://www.youtube.com/embed/Yt9Sld-EX0s" target="_blank"><img src="http://img.youtube.com/vi/Yt9Sld-EX0s/0.jpg"
alt="follow-person" width="240" height="180" border="10" /></a>
```bash
cd spark_noetic
source devel/setup.bash
roslaunch spark_slam 2d_slam_teleop.launch slam_methods_tel:=gmapping
```
3.Spark导航 Spark-Navigation
<a href="https://www.youtube.com/embed/3RP11sZKfJg" target="_blank"><img src="http://img.youtube.com/vi/3RP11sZKfJg/0.jpg"
alt="follow-person" width="240" height="180" border="10" /></a>
```bash
cd spark_noetic
source devel/setup.bash
roslaunch spark_navigation amcl_demo_lidar_rviz.launch
```
4.Spark-RtabMap建图 Spark-RtabMap-Mapping
<a href="https://www.youtube.com/embed/K5wvlWb-2uQ" target="_blank"><img src="http://img.youtube.com/vi/K5wvlWb-2uQ/0.jpg"
alt="follow-person" width="240" height="180" border="10" /></a>
```bash
cd spark_noetic
source devel/setup.bash
roslaunch spark_rtabmap spark_rtabmap_teleop.launch
```
5.Spark机械臂视觉抓取 Spark-Carry_Object
<a href="https://www.youtube.com/embed/aNPy6GYcdu0" target="_blank"><img src="http://img.youtube.com/vi/aNPy6GYcdu0/0.jpg"
alt="follow-person" width="240" height="180" border="10" /></a>
```bash
cd spark_noetic
source devel/setup.bash
roslaunch spark_carry_object spark_carry_object_only_cv3.launch
```

README_Detailed.md Executable file

@ -0,0 +1,478 @@
# Spark
This repository contains the ROS wrapper of Spark's driver plus various ROS applications. It is a meta-package.
## Table of Contents
* [Update Log](#update-log)
* [Packages Overview](#packages-overview)
* [Usage](#usage)
* [Mirror](#mirror)
* [Routine](#routine)
* [Spark-Follower](#spark-follower)
* [Spark-SLAM Mapping](#spark-slam-mapping)
* [Spark-Navigation](#spark-navigation)
* [RTABMap-DeepCamera-Mapping](#rtabmap-deepcamera-mapping)
* [Spark-RTABMap-Mapping-Navigation](#spark-rtabmap-mapping-navigation)
* [Routine (Chinese Version)](#routine-cn)
* [Spark-跟随演示](#spark-跟随演示)
* [Spark-SLAM地图构建](#spark-slam地图构建)
* [Spark-自动导航](#spark-自动导航)
* [RTABMap深度相机手持建图](#rtabmap-深度相机手持建图)
* [Spark-RTABMap建图与导航](#spark-rtabmap建图与导航)
## Update Log
* Raised the stack so that a lidar can be mounted on it.
* Updated the pre-installed packages so that navigation and gmapping can run.
## Packages Overview
* ***src*** : the Spark driver, including the base driver, camera driver, robot description, teleop package, person-following package, and more.
* ***tools*** : the third-party OpenNI2 driver used by the camera driver.
* ***doc*** : documentation on how to compile and use this meta-package.
## Usage
### Requirements
* System: Ubuntu 14.04+
* ROS Version: Indigo (Desktop-Full Install)
### Compile
Build the package with the following steps:
```bash
git clone https://github.com/NXROBO/spark.git
#install
cd spark
./onekey.sh
```
If everything goes fine, test the follow-person example as follows:
```bash
./install/follow_run.sh
```
# Mirror
We also provide a downloadable system image with all environments pre-configured.
* Download address: [spark_mirror](http://pan.baidu.com/s/1i4ZlH4p)
# Routine
## Spark-Follower
### Introduction
* Spark will follow the object in front of it and keep a certain distance.
### Procedure
* Ensure the Spark base and camera are properly connected to the laptop.
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host computer, launch the follower file
```bash
roslaunch spark_follower bringup.launch
```
* Start following
```
Move Spark to an open place and stand in front of it. Spark reacts as you move back and forth or left and right.
Keep your speed moderate and make sure there are not too many objects around you; otherwise the following behavior is not guaranteed.
```
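The keep-a-distance behavior can be pictured as a proportional controller on the measured distance. The sketch below is an illustration only, not spark_follower's actual implementation; the target distance, gain, and speed limit are assumed values.

```python
# Toy keep-distance follower (illustration; NOT spark_follower's real code).
TARGET_DIST = 0.8   # desired distance to the person, in meters (assumed)
GAIN = 0.5          # proportional gain (assumed)
MAX_SPEED = 0.4     # clamp on the commanded speed, in m/s (assumed)

def follow_cmd(measured_dist):
    """Return a forward velocity: positive to catch up, negative to back away."""
    error = measured_dist - TARGET_DIST
    cmd = GAIN * error
    return max(-MAX_SPEED, min(MAX_SPEED, cmd))
```

When the person stands at the target distance the command is zero; walking away produces a positive (forward) command, and stepping closer a negative one.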
## Spark-SLAM-Mapping
### Introduction
* Shows how to build maps on Spark with the Gmapping, Hector, Karto, and Frontier Exploration algorithms.
* Different algorithms suit different situations; users can pick the appropriate method for the task at hand.
### Procedure
* Default parameters are provided by the official ROS packages and can be adapted to various requirements.
* Ensure the Spark base, lidar or camera are properly connected to the laptop.
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host computer, launch the SLAM file
```bash
# Based on a 2D lidar:
roslaunch spark_slam 2d_slam.launch slam_methods:=gmapping
# Without RViz:
roslaunch spark_slam 2d_slam_norviz.launch slam_methods:=gmapping
# Based on the camera:
roslaunch spark_slam depth_slam.launch slam_methods:=gmapping
# Without RViz:
roslaunch spark_slam depth_slam_norviz.launch slam_methods:=gmapping
```
* On the host computer, in a new terminal, launch the keyboard control
```bash
rosrun spark_teleop spark_teleop_node 0.25 0.5
# 0.25 is the linear velocity and 0.5 the angular velocity; drive Spark with the WASD keys.
```
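Conceptually, the two arguments scale each key press into a velocity command. A minimal sketch of such a mapping follows; the key bindings here are assumptions for illustration, not the node's source:

```python
# Sketch of a WASD-to-velocity mapping (assumed bindings, for illustration).
def key_to_velocity(key, linear=0.25, angular=0.5):
    """Map a WASD key to a (linear m/s, angular rad/s) command; other keys stop."""
    bindings = {
        'w': (1, 0),   # forward
        's': (-1, 0),  # backward
        'a': (0, 1),   # turn left
        'd': (0, -1),  # turn right
    }
    lin, ang = bindings.get(key, (0, 0))
    return lin * linear, ang * angular
```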
* On the host computer, in a new terminal, save the map
```bash
rosrun map_server map_saver -f ~/my_map
```
### Addition
```
Note: Spark supports several SLAM methods.
1. Spark supports Gmapping, Hector, Karto, and Frontier Exploration.
2. Pass slam_methods:=xxxx to choose a method; the default is gmapping.
3. Valid values of slam_methods are gmapping, hector, karto, and frontier_exploration.
4. For example, to use Karto SLAM, run:
   roslaunch spark_slam 2d_slam.launch slam_methods:=karto
```
```bash
# Note: install the corresponding package or download the source code.
# Gmapping is already installed by install.sh.
# Hector Mapping:
sudo apt-get install ros-indigo-hector-mapping
# Frontier Exploration:
sudo apt-get install ros-indigo-frontier-exploration ros-indigo-navigation-stage
# Karto Mapping:
sudo apt-get install ros-indigo-slam-karto
```
## Spark-Navigation
### Introduction
* Given a map of the surrounding environment, Spark can navigate autonomously and avoid both static and moving obstacles.
### Procedure
* Ensure the Spark base, lidar or camera are properly connected to the laptop.
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host computer, launch the navigation file
```bash
# Based on a 2D lidar:
roslaunch spark_navigation amcl_demo_lidar.launch map_file:=/home/username/my_map.yaml
# (username is the host user name; map_file points to the .yaml file of the map you built.)
# Based on the camera:
roslaunch spark_navigation amcl_demo.launch map_file:=/home/username/my_map.yaml
# "odom received!" indicates that the navigation initialized successfully.
```
* In RViz, use 2D Pose Estimate to set the robot's rough position, holding the left mouse button to set its orientation.
* After the pose estimate, use 2D Nav Goal to specify the goal and the robot's final orientation.
## RTABMap-DeepCamera-Mapping
### Introduction
* RTAB-Map: Real-Time Appearance-Based Mapping
* RTAB-Map is an RGB-D SLAM method based on global loop-closure detection with real-time constraints.
* The method can generate a 3D point cloud of the surroundings.
### Procedure
* Ensure the Spark base and camera are properly connected to the laptop.
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host computer, launch the depth camera
```bash
roslaunch spark_rtabmap camera.launch
```
* On the host computer, in a new terminal, launch the mapping
```bash
roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start"
# P.S. "--delete_db_on_start" clears the old database.
```
* Move the camera slowly and steadily to construct the map
### Addition
```bash
# Note: install the rtabmap package in advance.
sudo apt-get install ros-indigo-rtabmap-ros
```
## Spark-RTABMap-Mapping-Navigation
### Introduction
* Use the rtabmap_ros package to construct a map and navigate on Spark.
* The method can generate a 3D point cloud and a 2D occupancy grid map of the surroundings.
### Procedure
* Ensure the Spark base and camera are properly connected to the laptop.
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host computer, launch Spark and the rtabmap-related nodes
```bash
roslaunch spark_rtabmap rtab_demo.launch
```
* On the host, in a new terminal, launch the mapping
```bash
roslaunch spark_rtabmap mapping.launch
# P.S. If an error says the rtabmap node cannot be found in rtabmap_ros, run:
source /opt/ros/indigo/setup.bash
```
* On the host, in a new terminal, launch the keyboard control
```bash
rosrun spark_teleop spark_teleop_node 0.25 0.5
# 0.25 is the linear velocity and 0.5 the angular velocity; drive Spark with the WASD keys.
```
* On the host, in a new terminal, launch localization and navigation
```bash
roslaunch spark_rtabmap mapping.launch localization:=true
```
* In RViz, use 2D Pose Estimate to set the robot's rough position, holding the left mouse button to set its orientation.
* After the pose estimate, use 2D Nav Goal to specify the goal and the robot's final orientation.
### Addition
```bash
# Note: install the rtabmap package in advance.
sudo apt-get install ros-indigo-rtabmap-ros
# Map data is saved in ~/.ros/rtabmap.db.
```
# Routine-CN
## Spark-跟随演示
### Introduction
* Spark follows a target in front of it at a certain distance: if the target is too close it backs off, and if it is too far it catches up.
### Procedure
* Ensure the Spark base and camera are properly connected to the host
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host, launch the follower file
```bash
roslaunch spark_follower bringup.launch
```
* Start following
```
Move Spark to an open area and stand in front of it. Spark moves according to what the camera sees as you move back and forth, and rotates as you move left and right; do not move sideways too fast. Avoid having too many objects around you, or Spark may switch to following something else.
```
## Spark-SLAM地图构建
### Introduction
* Shows how Spark builds maps with Gmapping, Hector SLAM, Karto SLAM, and Frontier Exploration
* Different mapping methods suit different scenarios; users can choose according to their needs
### Procedure
* Default parameters are provided by the official ROS packages and can be modified to fit different situations
* Ensure the Spark base, lidar or camera are properly connected to the host
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host, launch the SLAM mapping
```bash
# Mapping with a 2D lidar:
roslaunch spark_slam 2d_slam.launch slam_methods:=gmapping
# Without RViz:
roslaunch spark_slam 2d_slam_norviz.launch slam_methods:=gmapping
# Mapping with the camera:
roslaunch spark_slam depth_slam.launch slam_methods:=gmapping
# Without RViz:
roslaunch spark_slam depth_slam_norviz.launch slam_methods:=gmapping
```
* On the host, in a new terminal, launch the keyboard control
```bash
rosrun spark_teleop spark_teleop_node 0.25 0.5
# 0.25 is the linear velocity and 0.5 the angular velocity; drive Spark with the WASD keys.
```
* On the host, in a new terminal, save the map
```bash
rosrun map_server map_saver -f ~/my_map
```
### Addition
```
Note: Spark supports several SLAM methods.
1. Spark supports Gmapping, Hector, Karto, and Frontier Exploration.
2. Pass slam_methods:=xxxx to choose a method; the default is gmapping.
3. Valid values of slam_methods are gmapping, hector, karto, and frontier_exploration.
4. For example, to use Karto SLAM, run:
   roslaunch spark_slam 2d_slam.launch slam_methods:=karto
```
```bash
# Note: install the corresponding package or download the source code.
# Gmapping is already installed by install.sh.
# Hector Mapping:
sudo apt-get install ros-indigo-hector-mapping
# Frontier Exploration:
sudo apt-get install ros-indigo-frontier-exploration ros-indigo-navigation-stage
# Karto Mapping:
sudo apt-get install ros-indigo-slam-karto
```
## Spark-自动导航
### Introduction
* With a map of the environment already built, Spark can navigate autonomously within the map and avoid both static and dynamic obstacles
### Procedure
* Ensure the Spark base, camera or lidar are properly connected to the host
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host, launch the navigation file
```bash
# With a 2D lidar:
roslaunch spark_navigation amcl_demo_lidar.launch map_file:=/home/username/my_map.yaml
# (username is the host user name; map_file points to the map's .yaml file and can be changed to wherever the map was saved.)
# With the camera:
roslaunch spark_navigation amcl_demo.launch map_file:=/home/username/my_map.yaml
# If you see "odom received!", everything is running normally.
```
* In RViz, use "2D Pose Estimate" to set Spark's rough position, holding the left mouse button to set its rough orientation.
* Once the pose estimate is set, use "2D Nav Goal" to click where you want Spark to go, with the desired orientation.
## RTABMap-深度相机手持建图
### Introduction
* RTAB-Map: Real-Time Appearance-Based Mapping
* RTAB-Map is an RGB-D SLAM method based on global loop-closure detection with real-time constraints
* It can generate a 3D point cloud of the environment
### Procedure
* Ensure the Spark base and camera are properly connected to the host
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host, launch the depth camera
```bash
roslaunch spark_rtabmap camera.launch
```
* On the host, in a new terminal, launch the mapping mode
```bash
roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start"
# P.S. "--delete_db_on_start" clears the old database.
```
* Move the camera slowly to build the map
### Addition
```bash
# Note: install the rtabmap package before the experiment.
sudo apt-get install ros-indigo-rtabmap-ros
```
## Spark-RTABMap建图与导航
### Introduction
* Uses the rtabmap_ros package on Spark for mapping and navigation
* It can generate a 3D point cloud and a 2D occupancy grid map
### Procedure
* Ensure the Spark base and camera are properly connected to the host
* Configure the workspace
```bash
cd ~/spark_ws
source devel/setup.bash
```
* On the host, launch Spark and the rtabmap-related nodes
```bash
roslaunch spark_rtabmap rtab_demo.launch
```
* On the host, in a new terminal, launch the mapping
```bash
roslaunch spark_rtabmap mapping.launch
# P.S. If this step reports that rtabmap cannot be found in rtabmap_ros, run:
source /opt/ros/indigo/setup.bash
```
* On the host, in a new terminal, launch the keyboard control
```bash
rosrun spark_teleop spark_teleop_node 0.25 0.5
# 0.25 is the linear velocity and 0.5 the angular velocity; drive Spark with the WASD keys.
```
* On the host, in a new terminal, launch the localization and navigation mode
```bash
roslaunch spark_rtabmap mapping.launch localization:=true
```
* In RViz, use "2D Pose Estimate" to set Spark's rough position, holding the left mouse button to set its rough orientation.
* Once the pose estimate is set, use "2D Nav Goal" to click where you want Spark to go, with the desired orientation.
### Addition
```bash
# Note: install the rtabmap package before the experiment.
sudo apt-get install ros-indigo-rtabmap-ros
# Map data is saved in ~/.ros/rtabmap.db.
```


doc/install.sh Executable file

@ -0,0 +1,11 @@
echo 'Spark driver is installing'
echo 'Setting udev rules'
BASEPATH=$(cd `dirname $0`; pwd)
sudo cp $BASEPATH/rules/3ilidar-usb-serial.rules /etc/udev/rules.d/
sudo cp $BASEPATH/rules/spark-usb-serial.rules /etc/udev/rules.d/
sudo cp $BASEPATH/rules/orbbec-usb.rules /etc/udev/rules.d/556-orbbec-usb.rules
sudo udevadm trigger
echo 'Spark driver is installed'

doc/readme_dev Executable file

@ -0,0 +1,18 @@
1. Building the workspace:
   a. catkin build: in the spark workspace, run the command "catkin_make".
   b. Eclipse build: in the spark workspace, run the command "./eclipse.sh".
2. Running the follow app:
   See the readme in /spark/src/spark_app/spark_follower/doc.
3. Keyboard control:
   See the /spark/src/spark/spark_teleop node.
4. Depth camera:
   a. For usage, see the ~/spark/src/spark_app/spark_follower node.
   b. The depth camera driver, OpenNI2, is in the tool directory.

doc/readme_install Executable file

@ -0,0 +1,21 @@
1. Installation:
   a. Environment:
      OS: Ubuntu 14.04
      ROS version: Indigo
   b. Unpacking:
      Extract spar.rar.gz into the home directory; the unpacked directory is ~/spark/install.
   c. Setup:
      Run the following commands in a terminal:
      cd ~/spark/install/doc
      sudo ./install.sh
2. Running the follow app:
   cd ~/spark/install
   ./follow_run.sh
3. Keyboard control:
   source ./setup.bash
   rosrun teleop teleop 200 2


@ -0,0 +1 @@
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="3ilidar", MODE:="0666",OWNER:="root"

doc/rules/orbbec-usb.rules Executable file

@ -0,0 +1,11 @@
SUBSYSTEM=="usb", ATTR{idProduct}=="0400", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="0401", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="0402", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="0403", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="0404", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="0405", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="0406", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="0407", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="0408", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="0409", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
SUBSYSTEM=="usb", ATTR{idProduct}=="040a", ATTR{idVendor}=="2bc5", MODE:="0666", OWNER:="root", GROUP:="video"
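The eleven rules above differ only in idProduct (0x0400 through 0x040a). For maintenance they can be regenerated from a single template; this small generator is a convenience sketch, not part of the repository:

```python
# Regenerate the repetitive Orbbec udev rules from one template.
TEMPLATE = ('SUBSYSTEM=="usb", ATTR{{idProduct}}=="{pid}", ATTR{{idVendor}}=="2bc5", '
            'MODE:="0666", OWNER:="root", GROUP:="video"')

def orbbec_rules():
    # Product IDs 0x0400..0x040a, matching the rules file above.
    return [TEMPLATE.format(pid=format(pid, '04x')) for pid in range(0x0400, 0x040b)]

print('\n'.join(orbbec_rules()))
```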


@ -0,0 +1,2 @@
SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", SYMLINK+="sparkBase", MODE:="0666",OWNER:="root"


@ -0,0 +1 @@
SUBSYSTEM=="tty", ATTRS{idVendor}=="2341", ATTRS{idProduct}=="0042", SYMLINK+="uarm", MODE:="0666",OWNER:="root"

doc/标定棋盘9x7 20x20mm.pdf Executable file

Binary file not shown.

onekey.sh Executable file

File diff suppressed because it is too large


@ -0,0 +1,38 @@
cmake_minimum_required(VERSION 2.8.3)
project(app_shell)
# Load catkin and all dependencies required for this package
find_package(catkin REQUIRED COMPONENTS roscpp sensor_msgs)
# What other packages will need to use this package
catkin_package(
CATKIN_DEPENDS roscpp sensor_msgs
)
###########
## Build ##
###########
include_directories(${catkin_INCLUDE_DIRS})
# Add_executables
#add_executable(laser_footprint_filter src/laser_footprint_filter.cpp)
#target_link_libraries(laser_footprint_filter ${catkin_LIBRARIES})
#############
## Install ##
#############
# Mark executables and/or libraries for installation
#install(TARGETS laser_footprint_filter
# LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
#)
# Mark anything (useful) else for installation
install(DIRECTORY launch
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)


@ -0,0 +1,17 @@
<!--
The nxrobo intel movidius.
-->
<launch>
<include file="$(find spark_teleop)/launch/teleop.launch"/>
<arg name="cnn_model" default="GoogleNet" doc="cnn_model [GoogleNet, AlexNet, SqueezeNet, Inception_v1, Inception_v2, Inception_v3, Inception_v4, MobileNet]"/>
<include file="$(find movidius_ncs_launch)/launch/ncs_camera.launch">
<arg name="cnn_type" value="$(arg cnn_model)"/>
</include>
<include file="$(find movidius_ncs_launch)/launch/ncs_stream_classification_example.launch">
<arg name="camera_topic" value="/camera/rgb/image_raw" />
</include>
</launch>


@ -0,0 +1,16 @@
<!--
The nxrobo intel movidius.
-->
<launch>
<include file="$(find spark_teleop)/launch/teleop.launch"/>
<arg name="cnn_model" default="GoogleNet" doc="cnn_model [MobileNetSSD, TinyYolo_v1]"/>
<include file="$(find movidius_ncs_launch)/launch/ncs_camera.launch">
<arg name="cnn_type" value="$(arg cnn_model)"/>
</include>
<include file="$(find movidius_ncs_launch)/launch/ncs_stream_detection_example.launch">
<arg name="camera_topic" value="/camera/rgb/image_raw" />
</include>
</launch>


@ -0,0 +1,29 @@
<?xml version="1.0"?>
<package>
<name>app_shell</name>
<version>2.3.7</version>
<description>spark_navigation</description>
<maintainer email="litian.zhuang@nxrobo.com">litian.zhuang</maintainer>
<license>BSD</license>
<url type="website">http://wiki.ros.org/Robots/Spark</url>
<author>yutong.xie</author>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>tf</build_depend>
<build_depend>roscpp</build_depend>
<build_depend>sensor_msgs</build_depend>
<run_depend>tf</run_depend>
<run_depend>roscpp</run_depend>
<run_depend>sensor_msgs</run_depend>
<run_depend>move_base</run_depend>
<run_depend>map_server</run_depend>
<run_depend>amcl</run_depend>
<run_depend>gmapping</run_depend>
<run_depend>dwa_local_planner</run_depend>
<export>
</export>
</package>

src/3rd_app/darknet_ros/.gitmodules vendored Executable file

@ -0,0 +1,3 @@
[submodule "darknet"]
path = darknet
url = https://github.com/kunaltyagi/darknet

src/3rd_app/darknet_ros/LICENSE Executable file

@ -0,0 +1,24 @@
Copyright (c) 2017, Marko Bjelonic, Robotic Systems Lab, ETH Zurich
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

src/3rd_app/darknet_ros/README.md Executable file

@ -0,0 +1,193 @@
# YOLO ROS: Real-Time Object Detection for ROS
## Overview
This is a ROS package developed for object detection in camera images. You only look once (YOLO) is a state-of-the-art, real-time object detection system. In the following ROS package you are able to use YOLO (V3) on GPU and CPU. The pre-trained model of the convolutional neural network is able to detect pre-trained classes including the data set from VOC and COCO, or you can also create a network with your own detection objects. For more information about YOLO, Darknet, available training data and training YOLO see the following link: [YOLO: Real-Time Object Detection](http://pjreddie.com/darknet/yolo/).
The YOLO packages have been tested under ROS Melodic and Ubuntu 18.04. This is research code: expect it to change often, and any fitness for a particular purpose is disclaimed.
**Author: [Marko Bjelonic](https://www.markobjelonic.com), marko.bjelonic@mavt.ethz.ch**
**Affiliation: [Robotic Systems Lab](http://www.rsl.ethz.ch/), ETH Zurich**
![Darknet Ros example: Detection image](darknet_ros/doc/test_detection.png)
![Darknet Ros example: Detection image](darknet_ros/doc/test_detection_anymal.png)
Based on the [Pascal VOC](https://pjreddie.com/projects/pascal-voc-dataset-mirror/) 2012 dataset, YOLO can detect the 20 Pascal object classes:
- person
- bird, cat, cow, dog, horse, sheep
- aeroplane, bicycle, boat, bus, car, motorbike, train
- bottle, chair, dining table, potted plant, sofa, tv/monitor
Based on the [COCO](http://cocodataset.org/#home) dataset, YOLO can detect the 80 COCO object classes:
- person
- bicycle, car, motorbike, aeroplane, bus, train, truck, boat
- traffic light, fire hydrant, stop sign, parking meter, bench
- cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe
- backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket
- bottle, wine glass, cup, fork, knife, spoon, bowl
- banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake
- chair, sofa, pottedplant, bed, diningtable, toilet, tvmonitor, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush
## Citing
The YOLO methods used in this software are described in the paper: [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640).
If you are using YOLO V3 for ROS, please add the following citation to your publication:
M. Bjelonic
**"YOLO ROS: Real-Time Object Detection for ROS"**,
URL: https://github.com/leggedrobotics/darknet_ros, 2018.

    @misc{bjelonicYolo2018,
      author = {Marko Bjelonic},
      title = {{YOLO ROS}: Real-Time Object Detection for {ROS}},
      howpublished = {\url{https://github.com/leggedrobotics/darknet_ros}},
      year = {2016--2018},
    }
## Installation
### Dependencies
This software is built on the Robot Operating System ([ROS]), which needs to be [installed](http://wiki.ros.org) first. Additionally, YOLO for ROS depends on the following software:
- [OpenCV](http://opencv.org/) (computer vision library),
- [boost](http://www.boost.org/) (c++ library),
### Building
[![Build Status](https://ci.leggedrobotics.com/buildStatus/icon?job=github_leggedrobotics/darknet_ros/master)](https://ci.leggedrobotics.com/job/github_leggedrobotics/job/darknet_ros/job/master/)
In order to install darknet_ros, clone the latest version using SSH (see [how to set up an SSH key](https://confluence.atlassian.com/bitbucket/set-up-an-ssh-key-728138079.html)) from this repository into your catkin workspace and compile the package using ROS.

    cd catkin_workspace/src
    git clone --recursive git@github.com:leggedrobotics/darknet_ros.git
    cd ../
To maximize performance, make sure to build in *Release* mode. You can specify the build type by setting

    catkin_make -DCMAKE_BUILD_TYPE=Release
or using the [Catkin Command Line Tools](http://catkin-tools.readthedocs.io/en/latest/index.html#)

    catkin build darknet_ros -DCMAKE_BUILD_TYPE=Release
Darknet on the CPU is fast (approximately 1.5 seconds on an Intel Core i7-6700HQ CPU @ 2.60GHz × 8) but it's like 500 times faster on GPU! You'll have to have an Nvidia GPU and you'll have to install CUDA. The CMakeLists.txt file automatically detects if you have CUDA installed or not. CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia. If you do not have CUDA on your System the build process will switch to the CPU version of YOLO. If you are compiling with CUDA, you might receive the following build error:

    nvcc fatal : Unsupported gpu architecture 'compute_61'.
This means that you need to check the compute capability (version) of your GPU. You can find a list of supported GPUs in CUDA here: [CUDA - WIKIPEDIA](https://en.wikipedia.org/wiki/CUDA#Supported_GPUs). Simply find the compute capability of your GPU and add it into darknet_ros/CMakeLists.txt. Simply add a similar line like

    -O3 -gencode arch=compute_62,code=sm_62
### Download weights
The yolo-voc.weights and tiny-yolo-voc.weights are downloaded automatically in the CMakeLists.txt file. If you need to download them again, go into the weights folder and download the two pre-trained weights from the COCO data set:

    cd catkin_workspace/src/darknet_ros/darknet_ros/yolo_network_config/weights/
    wget http://pjreddie.com/media/files/yolov2.weights
    wget http://pjreddie.com/media/files/yolov2-tiny.weights
And weights from the VOC data set can be found here:

    wget http://pjreddie.com/media/files/yolov2-voc.weights
    wget http://pjreddie.com/media/files/yolov2-tiny-voc.weights
And the pre-trained weight from YOLO v3 can be found here:

    wget http://pjreddie.com/media/files/yolov3-voc.weights
    wget http://pjreddie.com/media/files/yolov3.weights
### Use your own detection objects
In order to use your own detection objects you need to provide your weights and your cfg file inside the directories:

    catkin_workspace/src/darknet_ros/darknet_ros/yolo_network_config/weights/
    catkin_workspace/src/darknet_ros/darknet_ros/yolo_network_config/cfg/
In addition, you need to create your config file for ROS where you define the names of the detection objects. You need to include it inside:

    catkin_workspace/src/darknet_ros/darknet_ros/config/
Then in the launch file you have to point to your new config file in the line:

    <rosparam command="load" ns="darknet_ros" file="$(find darknet_ros)/config/your_config_file.yaml"/>
### Unit Tests
Run the unit tests using the [Catkin Command Line Tools](http://catkin-tools.readthedocs.io/en/latest/index.html#)

    catkin build darknet_ros --no-deps --verbose --catkin-make-args run_tests
You will see the image above popping up.
## Basic Usage
In order to get YOLO ROS: Real-Time Object Detection for ROS to run with your robot, you will need to adapt a few parameters. It is easiest to duplicate and adapt all the parameter files that you need to change from the `darknet_ros` package. These are specifically the parameter files in `config` and the launch file from the `launch` folder.
## Nodes
### Node: darknet_ros
This is the main YOLO ROS: Real-Time Object Detection for ROS node. It uses the camera measurements to detect pre-learned objects in the frames.
### ROS related parameters
You can change the names and other parameters of the publishers, subscribers and actions inside `darknet_ros/config/ros.yaml`.
#### Subscribed Topics
* **`/camera_reading`** ([sensor_msgs/Image])
The camera measurements.
#### Published Topics
* **`object_detector`** ([std_msgs::Int8])
Publishes the number of detected objects.
* **`bounding_boxes`** ([darknet_ros_msgs::BoundingBoxes])
Publishes an array of bounding boxes that gives information of the position and size of the bounding box in pixel coordinates.
* **`detection_image`** ([sensor_msgs::Image])
Publishes the detection image, including the bounding boxes.
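A subscriber typically converts the pixel-coordinate corners of each box into a center and size for further processing. This is a minimal sketch of that conversion, using plain numbers rather than the actual message fields:

```python
# Sketch: convert a bounding box given as pixel corners into center and size.
def box_center_size(xmin, ymin, xmax, ymax):
    """Return ((cx, cy), (width, height)) for a box in pixel coordinates."""
    width, height = xmax - xmin, ymax - ymin
    return (xmin + width / 2, ymin + height / 2), (width, height)
```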
#### Actions
* **`camera_reading`** ([sensor_msgs::Image])
Sends an action with an image and the result is an array of bounding boxes.
### Detection related parameters
You can change the parameters related to the detection by adding a new config file that looks similar to `darknet_ros/config/yolo.yaml`.
* **`image_view/enable_opencv`** (bool)
Enable or disable the OpenCV view of the detection image, including the bounding boxes.
* **`image_view/wait_key_delay`** (int)
Wait key delay in ms for the OpenCV window.
* **`yolo_model/config_file/name`** (string)
Name of the cfg file of the network that is used for detection. The code searches for this name inside `darknet_ros/yolo_network_config/cfg/`.
* **`yolo_model/weight_file/name`** (string)
Name of the weights file of the network that is used for detection. The code searches for this name inside `darknet_ros/yolo_network_config/weights/`.
* **`yolo_model/threshold/value`** (float)
Threshold of the detection algorithm. It is defined between 0 and 1.
* **`yolo_model/detection_classes/names`** (array of strings)
Detection names of the network used by the cfg and weights file inside `darknet_ros/yolo_network_config/`.
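The threshold simply discards detections whose confidence falls below it. Schematically (an illustration of the parameter's effect, not the node's code; the default value shown is an assumption):

```python
# Sketch of confidence filtering as controlled by yolo_model/threshold/value.
def filter_detections(detections, threshold=0.3):
    """Keep (class_name, confidence) pairs with confidence >= threshold (0..1)."""
    return [d for d in detections if d[1] >= threshold]
```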

src/3rd_app/darknet_ros/darknet/.gitignore vendored Executable file

@ -0,0 +1,27 @@
*.o
*.dSYM
*.csv
*.out
*.png
*.jpg
*.pyc
old/
mnist/
data/
caffe/
grasp/
images/
opencv/
convnet/
decaf/
submission/
cfg/
darknet
.fuse*
# OS Generated #
.DS_Store*
ehthumbs.db
Icon?
Thumbs.db
*.swp


@ -0,0 +1,12 @@
YOLO LICENSE
Version 2, July 29 2016
THIS SOFTWARE LICENSE IS PROVIDED "ALL CAPS" SO THAT YOU KNOW IT IS SUPER
SERIOUS AND YOU DON'T MESS AROUND WITH COPYRIGHT LAW BECAUSE YOU WILL GET IN
TROUBLE HERE ARE SOME OTHER BUZZWORDS COMMONLY IN THESE THINGS WARRANTIES
LIABILITY CONTRACT TORT LIABLE CLAIMS RESTRICTION MERCHANTABILITY. NOW HERE'S
THE REAL LICENSE:
0. Darknet is public domain.
1. Do whatever you want with it.
2. Stop emailing me about it!

@ -0,0 +1,13 @@
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO.

@ -0,0 +1,91 @@
RNN LICENSE Version 3, June 21 2017
Copyright (c) 1990, 1989, 1999 Free87337 May 48 THIRD PARTIES OR ANY OTHER THE
COMPLAIN OR CONSEQUENTIAL DAMAGES AND REGARDLESS OF WHETHER IN CONTRACT, TO THE
EXTENT REPAIR OR AGENTS (NOT THE IN ANY EVENT). THE SOFTWARE WILL BE
UNINTERRUPTED OR ERROR-FREE OR ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF ALL THE WORK (GOVERNED CODE) HIM RESPONSES, OR OF FINES,
SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR ANY OTHER OR OTHER HARL UNDER NO
CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT (INCLUDING NEGLIGENCE),
PATENT PERMITTED BY THE INSTAGRAM PARENT STATE OR TORT (INCLUDING NEGLIGENCE),
PRODUCT LIABILITY OR OTHERWISE, ARISING OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR ANYTHING PROVIDED IN THIS PRODUCT, COMMIS AND SERVICES
ARE LICENSED SOFTWARE AND ANY RESULE OR ANY OTHER THE COPYRIGHT HOLDERS BE
LIABLE FOR ANY SPECIAL, INCIDENTAL, CASE, SUCH WARRANTIES, EXPRESS OR IMPLIED,
INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COPYRIGHT HOLDERS AND/OR ANY
PERSON FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY
EXPRESS OR DISTRIBUTE THAT ALL CLAIMS ARE SHALL CREATE DERAVE BE LIABLE TO YOU
WILL HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
6\. TERMINATION. TO THE EXTENT PERMITTED BY LAW, NO USE OF THE COVERED CODE IS
WITH YOU. SHOULD ANY COVERED CODE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE
INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY
SERVICING, REPAIR OR COULT OR IN ANY WAY OUT OF THE USE OF THE WEBSITES OR
SERVICE WILL BE CONSEQUENTIAL DAMAGES OF ANY KIND HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
This paragraph Agreement constitutes the entire agreement between the parties
with respect to the Work licensed here. However, if you place the name of the
fact that the arbitration was the consultation of the parties as a "patent is".
Subject to the terms and conditions of this License, Contributor has knowledge
that a license under a third party may also be used to endorse or promote
products derived from the Work, and there is no warranty on the Software and
Science Fees. For the purposes of this Agreement, attach the following
disclaimers (without liabilities of written notice to the Subject Software) in a
manner that a product is under common control with you. The Free Software
Foundation may publish revised and/or new versions of the License for the
Modifications made by the applicable terms. The Recipient shall promptly retain
the covered works for any reason be entered in any federal or state or login
Restricted Laws appearing in the United States or any of its own information
that is not disabled from a derivative work except as expressly permitted in
this License, to the extent that they are in receiving the Software and Source
Code or any exercise of the rights granted to You by this License or a
Contributor made by the Licensor or are authorized to make a reasonable
retirement by the courts of the courts located in Santa Clara County, California
printed and related to the Work or “Company” and Apache Software Foundation. If
the Licensor shall be entitled to reflect your rights to use the Software and
the Software to exercise the rights granted to the recipient without a
requirement to exercise the rights granted by the Agreement to the provision
will begin will appear in such cases, you will use such information without such
corporation shall be an officer with respect to any part of the Software or any
portion thereof. Capitalized terms are included in the Initial Contributor and
under no circumstances will license the Service at any time and for any direct,
indirect, special, incidental, or consequential damages of or assist in
connection with any Services or the registration purposes only to the extent
that it includes any or all means including the processing of which you download
any derivative work. Any of the purchases transmission purposes are made
available, if any, in other circumstances, we may review the copyright notice.
In the event that this Agreement is required to give us strict content. The
inclusion of the other party hereunder may also notify you Intellectual Property
Rights to any third party. This means that the Source Code exists of the Work
will not charge a program available to you at any time. You must include a
prominent statement that the Software is governed under a particular version of
this Agreement. You must include a provision to the extent that there is no
warranty for the content of others. You agree that the Recipient was appointed
as a Contributor, (c) are effective until terminated by hereunder, then the
registration are not disabled and not limited to, submit any Customer Data
without the updated use of the Software and that no fee is released. You grant
to Use Other Arbitration Rules for Diagnostic or Services may use or modify the
Apple Software and Consolidated Apple Software or Services. The Company may have
full risk as a product of the Compatible Source. A Contribution by the Licensor
or by the updated Software under the following conditions we can redistribute
any General Provision of this Agreement. If the Program is used in accordance
with the terms of this Agreement, Customer may provide advertisements from your
devices that clause you can your employer or a transaction or country that has
been controlled by the arbitrator, that they will be useful of this Agreement.
The term "Open Source Software is available in connection with the program, and
you may not protect the combination of the Covered Code. You should like to
select a user's rights to charge a copy of this License. I are Contributor's
confidentiality of the exercise of the rights granted herein. Such a covered
work is released as a consequence, the Licensor shall be eligible for a purpose
or subcontractor of the person or entity to the user of the user, then the word
"Application" means having the original fee for any reason; and that no patent
license to more than fifty stated close of the license term. The terms of this
License will the license terms and conditions set forth in Section 2.2 (OPEC)
and You will not use the Software or any set of responsibility for any resulting
information that the Original Code warrants that you have the right to disclose
these information (or in the notification; or (iii) late use of the software or
any third party to the three (50) days before such belief to the extent that it
includes a court court obtains the rights granted by this License.

@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
{one line to give the program's name and a brief idea of what it does.}
Copyright (C) {year} {name of author}
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
{project} Copyright (C) {year} {fullname}
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
@ -0,0 +1,8 @@
META-LICENSE
Version 1, June 21 2017
Any and all licenses may be applied to the software either individually
or in concert. Any issues, ambiguities, paradoxes, or metaphysical quandaries
arising from this combination should be discussed with a local faith leader,
hermit, or guru. The Oxford comma shall be used.
@ -0,0 +1,22 @@
MIT License
Copyright (c) 2017 Joseph Redmon
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@ -0,0 +1,13 @@
YOLO LICENSE
Version 1, July 10 2015
THIS SOFTWARE LICENSE IS PROVIDED "ALL CAPS" SO THAT YOU KNOW IT IS SUPER
SERIOUS AND YOU DON'T MESS AROUND WITH COPYRIGHT LAW BECAUSE YOU WILL GET IN
TROUBLE HERE ARE SOME OTHER BUZZWORDS COMMONLY IN THESE THINGS WARRANTIES
LIABILITY CONTRACT TORT LIABLE CLAIMS RESTRICTION MERCHANTABILITY SUBJECT TO
THE FOLLOWING CONDITIONS:
1. #yolo
2. #swag
3. #blazeit
@ -0,0 +1,105 @@
GPU=0
CUDNN=0
OPENCV=0
OPENMP=0
DEBUG=0
ARCH= -gencode arch=compute_30,code=sm_30 \
-gencode arch=compute_35,code=sm_35 \
-gencode arch=compute_50,code=[sm_50,compute_50] \
-gencode arch=compute_52,code=[sm_52,compute_52]
# -gencode arch=compute_20,code=[sm_20,sm_21] \ This one is deprecated?
# This is what I use, uncomment if you know your arch and want to specify
# ARCH= -gencode arch=compute_52,code=compute_52
VPATH=./src/:./examples
SLIB=libdarknet.so
ALIB=libdarknet.a
EXEC=darknet
OBJDIR=./obj/
CC=gcc
CPP=g++
NVCC=nvcc
AR=ar
ARFLAGS=rcs
OPTS=-Ofast
LDFLAGS= -lm -pthread
COMMON= -Iinclude/ -Isrc/
CFLAGS=-Wall -Wno-unused-result -Wno-unknown-pragmas -fPIC -fpermissive
ifeq ($(OPENMP), 1)
CFLAGS+= -fopenmp
endif
ifeq ($(DEBUG), 1)
OPTS=-O0 -g
endif
CFLAGS+=$(OPTS)
ifeq ($(OPENCV), 1)
COMMON+= -DOPENCV
CFLAGS+= -DOPENCV
LDFLAGS+= `pkg-config --libs opencv` -lstdc++
COMMON+= `pkg-config --cflags opencv`
endif
ifeq ($(GPU), 1)
COMMON+= -DGPU -I/usr/local/cuda/include/
CFLAGS+= -DGPU
LDFLAGS+= -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand
endif
ifeq ($(CUDNN), 1)
COMMON+= -DCUDNN
CFLAGS+= -DCUDNN
LDFLAGS+= -lcudnn
endif
OBJ=gemm.o utils.o cuda.o deconvolutional_layer.o convolutional_layer.o list.o image.o activations.o im2col.o col2im.o blas.o crop_layer.o dropout_layer.o maxpool_layer.o softmax_layer.o data.o matrix.o network.o connected_layer.o cost_layer.o parser.o option_list.o detection_layer.o route_layer.o upsample_layer.o box.o normalization_layer.o avgpool_layer.o layer.o local_layer.o shortcut_layer.o logistic_layer.o activation_layer.o rnn_layer.o gru_layer.o crnn_layer.o demo.o batchnorm_layer.o region_layer.o reorg_layer.o tree.o lstm_layer.o l2norm_layer.o yolo_layer.o iseg_layer.o image_opencv.o
EXECOBJA=captcha.o lsd.o super.o art.o tag.o cifar.o go.o rnn.o segmenter.o regressor.o classifier.o coco.o yolo.o detector.o nightmare.o instance-segmenter.o darknet.o
ifeq ($(GPU), 1)
LDFLAGS+= -lstdc++
OBJ+=convolutional_kernels.o deconvolutional_kernels.o activation_kernels.o im2col_kernels.o col2im_kernels.o blas_kernels.o crop_layer_kernels.o dropout_layer_kernels.o maxpool_layer_kernels.o avgpool_layer_kernels.o
endif
EXECOBJ = $(addprefix $(OBJDIR), $(EXECOBJA))
OBJS = $(addprefix $(OBJDIR), $(OBJ))
DEPS = $(wildcard src/*.h) Makefile include/darknet.h
all: obj backup results $(SLIB) $(ALIB) $(EXEC)
#all: obj results $(SLIB) $(ALIB) $(EXEC)
$(EXEC): $(EXECOBJ) $(ALIB)
$(CPP) $(COMMON) $(CFLAGS) $^ -o $@ $(LDFLAGS) $(ALIB)
$(ALIB): $(OBJS)
$(AR) $(ARFLAGS) $@ $^
$(SLIB): $(OBJS)
$(CPP) $(CFLAGS) -shared $^ -o $@ $(LDFLAGS)
$(OBJDIR)%.o: %.cpp $(DEPS)
$(CPP) $(COMMON) $(CFLAGS) -c $< -o $@
$(OBJDIR)%.o: %.c $(DEPS)
$(CPP) $(COMMON) $(CFLAGS) -c $< -o $@
$(OBJDIR)%.o: %.cu $(DEPS)
$(NVCC) $(ARCH) $(COMMON) --compiler-options "$(CFLAGS)" -c $< -o $@
obj:
mkdir -p obj
backup:
mkdir -p backup
results:
mkdir -p results
.PHONY: clean
clean:
rm -rf $(OBJS) $(SLIB) $(ALIB) $(EXEC) $(EXECOBJ) $(OBJDIR)/*
@ -0,0 +1,8 @@
![Darknet Logo](http://pjreddie.com/media/files/darknet-black-small.png)
# Darknet #
Darknet is an open source neural network framework written in C and CUDA. It is fast, easy to install, and supports CPU and GPU computation.
For more information see the [Darknet project website](http://pjreddie.com/darknet).
For questions or issues please use the [Google Group](https://groups.google.com/forum/#!forum/darknet).
@ -0,0 +1,59 @@
#include "darknet.h"
#include <sys/time.h>
void demo_art(char *cfgfile, char *weightfile, int cam_index)
{
#ifdef OPENCV
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
void * cap = open_video_stream(0, cam_index, 0,0,0);
char *window = "ArtJudgementBot9000!!!";
if(!cap) error("Couldn't connect to webcam.\n");
int i;
int idx[] = {37, 401, 434};
int n = sizeof(idx)/sizeof(idx[0]);
while(1){
image in = get_image_from_stream(cap);
image in_s = resize_image(in, net->w, net->h);
float *p = network_predict(net, in_s.data);
printf("\033[2J");
printf("\033[1;1H");
float score = 0;
for(i = 0; i < n; ++i){
float s = p[idx[i]];
if (s > score) score = s;
}
printf("I APPRECIATE THIS ARTWORK: %10.7f%%\n", score*100);
printf("[");
int upper = 30;
for(i = 0; i < upper; ++i){
printf("%c", ((i+.5) < score*upper) ? 219 : ' ');
}
printf("]\n");
show_image(in, window, 1);
free_image(in_s);
free_image(in);
}
#endif
}
void run_art(int argc, char **argv)
{
int cam_index = find_int_arg(argc, argv, "-c", 0);
char *cfg = argv[2];
char *weights = argv[3];
demo_art(cfg, weights, cam_index);
}
@ -0,0 +1,459 @@
#include "darknet.h"
#include <sys/time.h>
#include <assert.h>
void extend_data_truth(data *d, int n, float val)
{
int i, j;
for(i = 0; i < d->y.rows; ++i){
d->y.vals[i] = realloc(d->y.vals[i], (d->y.cols+n)*sizeof(float));
for(j = 0; j < n; ++j){
d->y.vals[i][d->y.cols + j] = val;
}
}
d->y.cols += n;
}
matrix network_loss_data(network *net, data test)
{
int i,b;
int k = 1;
matrix pred = make_matrix(test.X.rows, k);
float *X = calloc(net->batch*test.X.cols, sizeof(float));
float *y = calloc(net->batch*test.y.cols, sizeof(float));
for(i = 0; i < test.X.rows; i += net->batch){
for(b = 0; b < net->batch; ++b){
if(i+b == test.X.rows) break;
memcpy(X+b*test.X.cols, test.X.vals[i+b], test.X.cols*sizeof(float));
memcpy(y+b*test.y.cols, test.y.vals[i+b], test.y.cols*sizeof(float));
}
network orig = *net;
net->input = X;
net->truth = y;
net->train = 0;
net->delta = 0;
forward_network(net);
*net = orig;
float *delta = net->layers[net->n-1].output;
for(b = 0; b < net->batch; ++b){
if(i+b == test.X.rows) break;
int t = max_index(y + b*test.y.cols, 1000);
float err = sum_array(delta + b*net->outputs, net->outputs);
pred.vals[i+b][0] = -err;
//pred.vals[i+b][0] = 1-delta[b*net->outputs + t];
}
}
free(X);
free(y);
return pred;
}
void train_attention(char *datacfg, char *cfgfile, char *weightfile, int *gpus, int ngpus, int clear)
{
int i, j;
float avg_cls_loss = -1;
float avg_att_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
printf("%d\n", ngpus);
network **nets = calloc(ngpus, sizeof(network*));
srand(time(0));
int seed = rand();
for(i = 0; i < ngpus; ++i){
srand(seed);
#ifdef GPU
cuda_set_device(gpus[i]);
#endif
nets[i] = load_network(cfgfile, weightfile, clear);
nets[i]->learning_rate *= ngpus;
}
srand(time(0));
network *net = nets[0];
int imgs = net->batch * net->subdivisions * ngpus;
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
list *options = read_data_cfg(datacfg);
char *backup_directory = option_find_str(options, "backup", "/backup/");
char *label_list = option_find_str(options, "labels", "data/labels.list");
char *train_list = option_find_str(options, "train", "data/train.list");
int classes = option_find_int(options, "classes", 2);
char **labels = get_labels(label_list);
list *plist = get_paths(train_list);
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
int N = plist->size;
double time;
int divs=3;
int size=2;
load_args args = {0};
args.w = divs*net->w/size;
args.h = divs*net->h/size;
args.size = divs*net->w/size;
args.threads = 32;
args.hierarchy = net->hierarchy;
args.min = net->min_ratio*args.w;
args.max = net->max_ratio*args.w;
args.angle = net->angle;
args.aspect = net->aspect;
args.exposure = net->exposure;
args.saturation = net->saturation;
args.hue = net->hue;
args.paths = paths;
args.classes = classes;
args.n = imgs;
args.m = N;
args.labels = labels;
args.type = CLASSIFICATION_DATA;
data train;
data buffer;
pthread_t load_thread;
args.d = &buffer;
load_thread = load_data(args);
int epoch = (*net->seen)/N;
while(get_current_batch(net) < net->max_batches || net->max_batches == 0){
time = what_time_is_it_now();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data(args);
data resized = resize_data(train, net->w, net->h);
extend_data_truth(&resized, divs*divs, 0);
data *tiles = tile_data(train, divs, size);
printf("Loaded: %lf seconds\n", what_time_is_it_now()-time);
time = what_time_is_it_now();
float aloss = 0;
float closs = 0;
int z;
for (i = 0; i < divs*divs/ngpus; ++i) {
#pragma omp parallel for
for(j = 0; j < ngpus; ++j){
int index = i*ngpus + j;
extend_data_truth(tiles+index, divs*divs, SECRET_NUM);
matrix deltas = network_loss_data(nets[j], tiles[index]);
for(z = 0; z < resized.y.rows; ++z){
resized.y.vals[z][train.y.cols + index] = deltas.vals[z][0];
}
free_matrix(deltas);
}
}
int *inds = calloc(resized.y.rows, sizeof(int));
for(z = 0; z < resized.y.rows; ++z){
int index = max_index(resized.y.vals[z] + train.y.cols, divs*divs);
inds[z] = index;
for(i = 0; i < divs*divs; ++i){
resized.y.vals[z][train.y.cols + i] = (i == index)? 1 : 0;
}
}
data best = select_data(tiles, inds);
free(inds);
#ifdef GPU
if (ngpus == 1) {
closs = train_network(net, best);
} else {
closs = train_networks(nets, ngpus, best, 4);
}
#endif
for (i = 0; i < divs*divs; ++i) {
printf("%.2f ", resized.y.vals[0][train.y.cols + i]);
if((i+1)%divs == 0) printf("\n");
free_data(tiles[i]);
}
free_data(best);
printf("\n");
image im = float_to_image(64,64,3,resized.X.vals[0]);
//show_image(im, "orig");
//cvWaitKey(100);
/*
image im1 = float_to_image(64,64,3,tiles[i].X.vals[0]);
image im2 = float_to_image(64,64,3,resized.X.vals[0]);
show_image(im1, "tile");
show_image(im2, "res");
*/
#ifdef GPU
if (ngpus == 1) {
aloss = train_network(net, resized);
} else {
aloss = train_networks(nets, ngpus, resized, 4);
}
#endif
for(i = 0; i < divs*divs; ++i){
printf("%f ", nets[0]->output[1000 + i]);
if ((i+1) % divs == 0) printf("\n");
}
printf("\n");
free_data(resized);
free_data(train);
if(avg_cls_loss == -1) avg_cls_loss = closs;
if(avg_att_loss == -1) avg_att_loss = aloss;
avg_cls_loss = avg_cls_loss*.9 + closs*.1;
avg_att_loss = avg_att_loss*.9 + aloss*.1;
printf("%ld, %.3f: Att: %f, %f avg, Class: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net->seen)/N, aloss, avg_att_loss, closs, avg_cls_loss, get_current_rate(net), what_time_is_it_now()-time, *net->seen);
if(*net->seen/N > epoch){
epoch = *net->seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%1000 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
pthread_join(load_thread, 0);
free_network(net);
free_ptrs((void**)labels, classes);
free_ptrs((void**)paths, plist->size);
free_list(plist);
free(base);
}
void validate_attention_single(char *datacfg, char *filename, char *weightfile)
{
int i, j;
network *net = load_network(filename, weightfile, 0);
set_batch_network(net, 1);
srand(time(0));
list *options = read_data_cfg(datacfg);
char *label_list = option_find_str(options, "labels", "data/labels.list");
char *leaf_list = option_find_str(options, "leaves", 0);
if(leaf_list) change_leaves(net->hierarchy, leaf_list);
char *valid_list = option_find_str(options, "valid", "data/train.list");
int classes = option_find_int(options, "classes", 2);
int topk = option_find_int(options, "top", 1);
char **labels = get_labels(label_list);
list *plist = get_paths(valid_list);
char **paths = (char **)list_to_array(plist);
int m = plist->size;
free_list(plist);
float avg_acc = 0;
float avg_topk = 0;
int *indexes = calloc(topk, sizeof(int));
int divs = 4;
int size = 2;
int extra = 0;
float *avgs = calloc(classes, sizeof(float));
int *inds = calloc(divs*divs, sizeof(int));
for(i = 0; i < m; ++i){
int class_id = -1;
char *path = paths[i];
for(j = 0; j < classes; ++j){
if(strstr(path, labels[j])){
class_id = j;
break;
}
}
image im = load_image_color(paths[i], 0, 0);
image resized = resize_min(im, net->w*divs/size);
image crop = crop_image(resized, (resized.w - net->w*divs/size)/2, (resized.h - net->h*divs/size)/2, net->w*divs/size, net->h*divs/size);
image rcrop = resize_image(crop, net->w, net->h);
//show_image(im, "orig");
//show_image(crop, "cropped");
//cvWaitKey(0);
float *pred = network_predict(net, rcrop.data);
//pred[classes + 56] = 0;
for(j = 0; j < divs*divs; ++j){
printf("%.2f ", pred[classes + j]);
if((j+1)%divs == 0) printf("\n");
}
printf("\n");
copy_cpu(classes, pred, 1, avgs, 1);
top_k(pred + classes, divs*divs, divs*divs, inds);
show_image(crop, "crop");
for(j = 0; j < extra; ++j){
int index = inds[j];
int row = index / divs;
int col = index % divs;
int y = row * crop.h / divs - (net->h - crop.h/divs)/2;
int x = col * crop.w / divs - (net->w - crop.w/divs)/2;
printf("%d %d %d %d\n", row, col, y, x);
image tile = crop_image(crop, x, y, net->w, net->h);
float *pred = network_predict(net, tile.data);
axpy_cpu(classes, 1., pred, 1, avgs, 1);
show_image(tile, "tile");
//cvWaitKey(10);
}
if(net->hierarchy) hierarchy_predictions(pred, net->outputs, net->hierarchy, 1, 1);
if(rcrop.data != resized.data) free_image(rcrop);
if(resized.data != im.data) free_image(resized);
free_image(im);
free_image(crop);
top_k(pred, classes, topk, indexes);
if(indexes[0] == class_id) avg_acc += 1;
for(j = 0; j < topk; ++j){
if(indexes[j] == class_id) avg_topk += 1;
}
printf("%d: top 1: %f, top %d: %f\n", i, avg_acc/(i+1), topk, avg_topk/(i+1));
}
}
void validate_attention_multi(char *datacfg, char *filename, char *weightfile)
{
int i, j;
network *net = load_network(filename, weightfile, 0);
set_batch_network(net, 1);
srand(time(0));
list *options = read_data_cfg(datacfg);
char *label_list = option_find_str(options, "labels", "data/labels.list");
char *valid_list = option_find_str(options, "valid", "data/train.list");
int classes = option_find_int(options, "classes", 2);
int topk = option_find_int(options, "top", 1);
char **labels = get_labels(label_list);
list *plist = get_paths(valid_list);
int scales[] = {224, 288, 320, 352, 384};
int nscales = sizeof(scales)/sizeof(scales[0]);
char **paths = (char **)list_to_array(plist);
int m = plist->size;
free_list(plist);
float avg_acc = 0;
float avg_topk = 0;
int *indexes = calloc(topk, sizeof(int));
for(i = 0; i < m; ++i){
int class_id = -1;
char *path = paths[i];
for(j = 0; j < classes; ++j){
if(strstr(path, labels[j])){
class_id = j;
break;
}
}
float *pred = calloc(classes, sizeof(float));
image im = load_image_color(paths[i], 0, 0);
for(j = 0; j < nscales; ++j){
image r = resize_min(im, scales[j]);
resize_network(net, r.w, r.h);
float *p = network_predict(net, r.data);
if(net->hierarchy) hierarchy_predictions(p, net->outputs, net->hierarchy, 1 , 1);
axpy_cpu(classes, 1, p, 1, pred, 1);
flip_image(r);
p = network_predict(net, r.data);
axpy_cpu(classes, 1, p, 1, pred, 1);
if(r.data != im.data) free_image(r);
}
free_image(im);
top_k(pred, classes, topk, indexes);
free(pred);
if(indexes[0] == class_id) avg_acc += 1;
for(j = 0; j < topk; ++j){
if(indexes[j] == class_id) avg_topk += 1;
}
printf("%d: top 1: %f, top %d: %f\n", i, avg_acc/(i+1), topk, avg_topk/(i+1));
}
}
void predict_attention(char *datacfg, char *cfgfile, char *weightfile, char *filename, int top)
{
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
list *options = read_data_cfg(datacfg);
char *name_list = option_find_str(options, "names", 0);
if(!name_list) name_list = option_find_str(options, "labels", "data/labels.list");
if(top == 0) top = option_find_int(options, "top", 1);
int i = 0;
char **names = get_labels(name_list);
clock_t time;
int *indexes = calloc(top, sizeof(int));
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
image r = letterbox_image(im, net->w, net->h);
//resize_network(&net, r.w, r.h);
//printf("%d %d\n", r.w, r.h);
float *X = r.data;
time=clock();
float *predictions = network_predict(net, X);
if(net->hierarchy) hierarchy_predictions(predictions, net->outputs, net->hierarchy, 1, 1);
top_k(predictions, net->outputs, top, indexes);
fprintf(stderr, "%s: Predicted in %f seconds.\n", input, sec(clock()-time));
for(i = 0; i < top; ++i){
int index = indexes[i];
//if(net->hierarchy) printf("%d, %s: %f, parent: %s \n",index, names[index], predictions[index], (net->hierarchy->parent[index] >= 0) ? names[net->hierarchy->parent[index]] : "Root");
//else printf("%s: %f\n",names[index], predictions[index]);
printf("%5.2f%%: %s\n", predictions[index]*100, names[index]);
}
if(r.data != im.data) free_image(r);
free_image(im);
if (filename) break;
}
}
void run_attention(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *gpu_list = find_char_arg(argc, argv, "-gpus", 0);
int ngpus;
int *gpus = read_intlist(gpu_list, &ngpus, gpu_index);
int top = find_int_arg(argc, argv, "-t", 0);
int clear = find_arg(argc, argv, "-clear");
char *data = argv[3];
char *cfg = argv[4];
char *weights = (argc > 5) ? argv[5] : 0;
char *filename = (argc > 6) ? argv[6]: 0;
char *layer_s = (argc > 7) ? argv[7]: 0;
if(0==strcmp(argv[2], "predict")) predict_attention(data, cfg, weights, filename, top);
else if(0==strcmp(argv[2], "train")) train_attention(data, cfg, weights, gpus, ngpus, clear);
else if(0==strcmp(argv[2], "valid")) validate_attention_single(data, cfg, weights);
else if(0==strcmp(argv[2], "validmulti")) validate_attention_multi(data, cfg, weights);
}
@ -0,0 +1,353 @@
#include "darknet.h"
void fix_data_captcha(data d, int mask)
{
matrix labels = d.y;
int i, j;
for(i = 0; i < d.y.rows; ++i){
for(j = 0; j < d.y.cols; j += 2){
if (mask){
if(!labels.vals[i][j]){
labels.vals[i][j] = SECRET_NUM;
labels.vals[i][j+1] = SECRET_NUM;
}else if(labels.vals[i][j+1]){
labels.vals[i][j] = 0;
}
} else{
if (labels.vals[i][j]) {
labels.vals[i][j+1] = 0;
} else {
labels.vals[i][j+1] = 1;
}
}
}
}
}
void train_captcha(char *cfgfile, char *weightfile)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
network *net = load_network(cfgfile, weightfile, 0);
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
int imgs = 1024;
int i = *net->seen/imgs;
int solved = 1;
list *plist;
char **labels = get_labels("/data/captcha/reimgs.labels.list");
if (solved){
plist = get_paths("/data/captcha/reimgs.solved.list");
}else{
plist = get_paths("/data/captcha/reimgs.raw.list");
}
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
clock_t time;
pthread_t load_thread;
data train;
data buffer;
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.paths = paths;
args.classes = 26;
args.n = imgs;
args.m = plist->size;
args.labels = labels;
args.d = &buffer;
args.type = CLASSIFICATION_DATA;
load_thread = load_data_in_thread(args);
while(1){
++i;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
fix_data_captcha(train, solved);
/*
image im = float_to_image(256, 256, 3, train.X.vals[114]);
show_image(im, "training");
cvWaitKey(0);
*/
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %lf seconds, %ld images\n", i, loss, avg_loss, sec(clock()-time), *net->seen);
free_data(train);
if(i%100==0){
char buff[256];
sprintf(buff, "/home/pjreddie/imagenet_backup/%s_%d.weights",base, i);
save_weights(net, buff);
}
}
}
void test_captcha(char *cfgfile, char *weightfile, char *filename)
{
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
int i = 0;
char **names = get_labels("/data/captcha/reimgs.labels.list");
char buff[256];
char *input = buff;
int indexes[26];
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
//printf("Enter Image Path: ");
//fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, net->w, net->h);
float *X = im.data;
float *predictions = network_predict(net, X);
top_predictions(net, 26, indexes);
//printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
for(i = 0; i < 26; ++i){
int index = indexes[i];
if(i != 0) printf(", ");
printf("%s %f", names[index], predictions[index]);
}
printf("\n");
fflush(stdout);
free_image(im);
if (filename) break;
}
}
void valid_captcha(char *cfgfile, char *weightfile, char *filename)
{
char **labels = get_labels("/data/captcha/reimgs.labels.list");
network *net = load_network(cfgfile, weightfile, 0);
list *plist = get_paths("/data/captcha/reimgs.fg.list");
char **paths = (char **)list_to_array(plist);
int N = plist->size;
int outputs = net->outputs;
set_batch_network(net, 1);
srand(2222222);
int i, j;
for(i = 0; i < N; ++i){
if (i%100 == 0) fprintf(stderr, "%d\n", i);
image im = load_image_color(paths[i], net->w, net->h);
float *X = im.data;
float *predictions = network_predict(net, X);
//printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
int truth = -1;
/* only the first 13 label strings are checked against the file path here */
for(j = 0; j < 13; ++j){
if (strstr(paths[i], labels[j])) truth = j;
}
if (truth == -1){
fprintf(stderr, "bad: %s\n", paths[i]);
return;
}
printf("%d, ", truth);
for(j = 0; j < outputs; ++j){
if (j != 0) printf(", ");
printf("%f", predictions[j]);
}
printf("\n");
fflush(stdout);
free_image(im);
if (filename) break;
}
}
/*
void train_captcha(char *cfgfile, char *weightfile)
{
float avg_loss = -1;
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
int imgs = 1024;
int i = net->seen/imgs;
list *plist = get_paths("/data/captcha/train.auto5");
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
clock_t time;
while(1){
++i;
time=clock();
data train = load_data_captcha(paths, imgs, plist->size, 10, 200, 60);
translate_data_rows(train, -128);
scale_data_rows(train, 1./128);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
net->seen += imgs;
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %lf seconds, %d images\n", i, loss, avg_loss, sec(clock()-time), net->seen);
free_data(train);
if(i%10==0){
char buff[256];
sprintf(buff, "/home/pjreddie/imagenet_backup/%s_%d.weights",base, i);
save_weights(net, buff);
}
}
}
void decode_captcha(char *cfgfile, char *weightfile)
{
setbuf(stdout, NULL);
srand(time(0));
network net = parse_network_cfg(cfgfile);
set_batch_network(&net, 1);
if(weightfile){
load_weights(&net, weightfile);
}
char filename[256];
while(1){
printf("Enter filename: ");
fgets(filename, 256, stdin);
strtok(filename, "\n");
image im = load_image_color(filename, 300, 57);
scale_image(im, 1./255.);
float *X = im.data;
float *predictions = network_predict(net, X);
image out = float_to_image(300, 57, 1, predictions);
show_image(out, "decoded");
#ifdef OPENCV
cvWaitKey(0);
#endif
free_image(im);
}
}
void encode_captcha(char *cfgfile, char *weightfile)
{
float avg_loss = -1;
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
int imgs = 1024;
int i = net->seen/imgs;
list *plist = get_paths("/data/captcha/encode.list");
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
clock_t time;
while(1){
++i;
time=clock();
data train = load_data_captcha_encode(paths, imgs, plist->size, 300, 57);
scale_data_rows(train, 1./255);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
net->seen += imgs;
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %lf seconds, %d images\n", i, loss, avg_loss, sec(clock()-time), net->seen);
free_matrix(train.X);
if(i%100==0){
char buff[256];
sprintf(buff, "/home/pjreddie/imagenet_backup/%s_%d.weights",base, i);
save_weights(net, buff);
}
}
}
void validate_captcha(char *cfgfile, char *weightfile)
{
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
int numchars = 37;
list *plist = get_paths("/data/captcha/solved.hard");
char **paths = (char **)list_to_array(plist);
int imgs = plist->size;
data valid = load_data_captcha(paths, imgs, 0, 10, 200, 60);
translate_data_rows(valid, -128);
scale_data_rows(valid, 1./128);
matrix pred = network_predict_data(net, valid);
int i, k;
int correct = 0;
int total = 0;
int accuracy = 0;
for(i = 0; i < imgs; ++i){
int allcorrect = 1;
for(k = 0; k < 10; ++k){
char truth = int_to_alphanum(max_index(valid.y.vals[i]+k*numchars, numchars));
char prediction = int_to_alphanum(max_index(pred.vals[i]+k*numchars, numchars));
if (truth != prediction) allcorrect=0;
if (truth != '.' && truth == prediction) ++correct;
if (truth != '.' || truth != prediction) ++total;
}
accuracy += allcorrect;
}
printf("Word Accuracy: %f, Char Accuracy %f\n", (float)accuracy/imgs, (float)correct/total);
free_data(valid);
}
void test_captcha(char *cfgfile, char *weightfile)
{
setbuf(stdout, NULL);
srand(time(0));
//char *base = basecfg(cfgfile);
//printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
set_batch_network(&net, 1);
if(weightfile){
load_weights(&net, weightfile);
}
char filename[256];
while(1){
//printf("Enter filename: ");
fgets(filename, 256, stdin);
strtok(filename, "\n");
image im = load_image_color(filename, 200, 60);
translate_image(im, -128);
scale_image(im, 1/128.);
float *X = im.data;
float *predictions = network_predict(net, X);
print_letters(predictions, 10);
free_image(im);
}
}
*/
void run_captcha(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5]: 0;
if(0==strcmp(argv[2], "train")) train_captcha(cfg, weights);
else if(0==strcmp(argv[2], "test")) test_captcha(cfg, weights, filename);
else if(0==strcmp(argv[2], "valid")) valid_captcha(cfg, weights, filename);
//if(0==strcmp(argv[2], "test")) test_captcha(cfg, weights);
//else if(0==strcmp(argv[2], "encode")) encode_captcha(cfg, weights);
//else if(0==strcmp(argv[2], "decode")) decode_captcha(cfg, weights);
//else if(0==strcmp(argv[2], "valid")) validate_captcha(cfg, weights);
}
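/*
 * Example invocations of run_captcha above (paths and file names are
 * illustrative, not taken from this commit):
 *   ./darknet captcha train cfg/captcha.cfg
 *   ./darknet captcha test  cfg/captcha.cfg captcha.weights image.png
 *   ./darknet captcha valid cfg/captcha.cfg captcha.weights
 */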

/* ---- next file added by this commit (CIFAR-10 examples, 251 lines) ---- */
#include "darknet.h"
void train_cifar(char *cfgfile, char *weightfile)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
network *net = load_network(cfgfile, weightfile, 0);
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
char *backup_directory = "/home/pjreddie/backup/";
int classes = 10;
int N = 50000;
char **labels = get_labels("data/cifar/labels.txt");
int epoch = (*net->seen)/N;
data train = load_all_cifar10();
while(get_current_batch(net) < net->max_batches || net->max_batches == 0){
clock_t time=clock();
float loss = train_network_sgd(net, train, 1);
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.95 + loss*.05;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net->seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net->seen);
if(*net->seen/N > epoch){
epoch = *net->seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
free_network(net);
free_ptrs((void**)labels, classes);
free(base);
free_data(train);
}
void train_cifar_distill(char *cfgfile, char *weightfile)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
network *net = load_network(cfgfile, weightfile, 0);
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
char *backup_directory = "/home/pjreddie/backup/";
int classes = 10;
int N = 50000;
char **labels = get_labels("data/cifar/labels.txt");
int epoch = (*net->seen)/N;
data train = load_all_cifar10();
matrix soft = csv_to_matrix("results/ensemble.csv");
float weight = .9;
scale_matrix(soft, weight);
scale_matrix(train.y, 1. - weight);
matrix_add_matrix(soft, train.y);
while(get_current_batch(net) < net->max_batches || net->max_batches == 0){
clock_t time=clock();
float loss = train_network_sgd(net, train, 1);
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.95 + loss*.05;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net->seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net->seen);
if(*net->seen/N > epoch){
epoch = *net->seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
free_network(net);
free_ptrs((void**)labels, classes);
free(base);
free_data(train);
}
void test_cifar_multi(char *filename, char *weightfile)
{
network *net = load_network(filename, weightfile, 0);
set_batch_network(net, 1);
srand(time(0));
float avg_acc = 0;
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
int i;
for(i = 0; i < test.X.rows; ++i){
image im = float_to_image(32, 32, 3, test.X.vals[i]);
float pred[10] = {0};
float *p = network_predict(net, im.data);
axpy_cpu(10, 1, p, 1, pred, 1);
flip_image(im);
p = network_predict(net, im.data);
axpy_cpu(10, 1, p, 1, pred, 1);
int index = max_index(pred, 10);
int class_id = max_index(test.y.vals[i], 10);
if(index == class_id) avg_acc += 1;
free_image(im);
printf("%4d: %.2f%%\n", i, 100.*avg_acc/(i+1));
}
}
void test_cifar(char *filename, char *weightfile)
{
network *net = load_network(filename, weightfile, 0);
srand(time(0));
clock_t time;
float avg_acc = 0;
float avg_top5 = 0;
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
time=clock();
float *acc = network_accuracies(net, test, 2);
avg_acc += acc[0];
avg_top5 += acc[1]; /* top-k accuracy is computed but not reported below */
printf("top1: %f, %lf seconds, %d images\n", avg_acc, sec(clock()-time), test.X.rows);
free_data(test);
}
void extract_cifar()
{
char *labels[] = {"airplane","automobile","bird","cat","deer","dog","frog","horse","ship","truck"};
int i;
data train = load_all_cifar10();
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
for(i = 0; i < train.X.rows; ++i){
image im = float_to_image(32, 32, 3, train.X.vals[i]);
int class_id = max_index(train.y.vals[i], 10);
char buff[256];
sprintf(buff, "data/cifar/train/%d_%s",i,labels[class_id]);
save_image_options(im, buff, PNG, 0);
}
for(i = 0; i < test.X.rows; ++i){
image im = float_to_image(32, 32, 3, test.X.vals[i]);
int class_id = max_index(test.y.vals[i], 10);
char buff[256];
sprintf(buff, "data/cifar/test/%d_%s",i,labels[class_id]);
save_image_options(im, buff, PNG, 0);
}
}
void test_cifar_csv(char *filename, char *weightfile)
{
network *net = load_network(filename, weightfile, 0);
srand(time(0));
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
matrix pred = network_predict_data(net, test);
int i;
for(i = 0; i < test.X.rows; ++i){
image im = float_to_image(32, 32, 3, test.X.vals[i]);
flip_image(im);
}
matrix pred2 = network_predict_data(net, test);
scale_matrix(pred, .5);
scale_matrix(pred2, .5);
matrix_add_matrix(pred2, pred);
matrix_to_csv(pred);
fprintf(stderr, "Accuracy: %f\n", matrix_topk_accuracy(test.y, pred, 1));
free_data(test);
}
void test_cifar_csvtrain(char *cfg, char *weights)
{
network *net = load_network(cfg, weights, 0);
srand(time(0));
data test = load_all_cifar10();
matrix pred = network_predict_data(net, test);
int i;
for(i = 0; i < test.X.rows; ++i){
image im = float_to_image(32, 32, 3, test.X.vals[i]);
flip_image(im);
}
matrix pred2 = network_predict_data(net, test);
scale_matrix(pred, .5);
scale_matrix(pred2, .5);
matrix_add_matrix(pred2, pred);
matrix_to_csv(pred);
fprintf(stderr, "Accuracy: %f\n", matrix_topk_accuracy(test.y, pred, 1));
free_data(test);
}
void eval_cifar_csv()
{
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
matrix pred = csv_to_matrix("results/combined.csv");
fprintf(stderr, "%d %d\n", pred.rows, pred.cols);
fprintf(stderr, "Accuracy: %f\n", matrix_topk_accuracy(test.y, pred, 1));
free_data(test);
free_matrix(pred);
}
void run_cifar(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
if(0==strcmp(argv[2], "train")) train_cifar(cfg, weights);
else if(0==strcmp(argv[2], "extract")) extract_cifar();
else if(0==strcmp(argv[2], "distill")) train_cifar_distill(cfg, weights);
else if(0==strcmp(argv[2], "test")) test_cifar(cfg, weights);
else if(0==strcmp(argv[2], "multi")) test_cifar_multi(cfg, weights);
else if(0==strcmp(argv[2], "csv")) test_cifar_csv(cfg, weights);
else if(0==strcmp(argv[2], "csvtrain")) test_cifar_csvtrain(cfg, weights);
else if(0==strcmp(argv[2], "eval")) eval_cifar_csv();
}
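/*
 * Example invocations of run_cifar above (weights path is illustrative):
 *   ./darknet cifar train cfg/cifar.cfg
 *   ./darknet cifar test  cfg/cifar.cfg cifar.weights
 * Note: "extract" and "eval" ignore the cfg/weights arguments, but the
 * argc < 4 check above still requires placeholders on the command line.
 */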

/* ---- one file's diff was suppressed by the viewer (too large) ---- */

/* ---- next file added by this commit (COCO examples, 357 lines) ---- */
#include "darknet.h"
#include <stdio.h>
char *coco_classes[] = {"person","bicycle","car","motorcycle","airplane","bus","train","truck","boat","traffic light","fire hydrant","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow","elephant","bear","zebra","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee","skis","snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard","tennis racket","bottle","wine glass","cup","fork","knife","spoon","bowl","banana","apple","sandwich","orange","broccoli","carrot","hot dog","pizza","donut","cake","chair","couch","potted plant","bed","dining table","toilet","tv","laptop","mouse","remote","keyboard","cell phone","microwave","oven","toaster","sink","refrigerator","book","clock","vase","scissors","teddy bear","hair drier","toothbrush"};
int coco_ids[] = {1,2,3,4,5,6,7,8,9,10,11,13,14,15,16,17,18,19,20,21,22,23,24,25,27,28,31,32,33,34,35,36,37,38,39,40,41,42,43,44,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,67,70,72,73,74,75,76,77,78,79,80,81,82,84,85,86,87,88,89,90};
void train_coco(char *cfgfile, char *weightfile)
{
//char *train_images = "/home/pjreddie/data/voc/test/train.txt";
//char *train_images = "/home/pjreddie/data/coco/train.txt";
char *train_images = "data/coco.trainval.txt";
//char *train_images = "data/bags.train.list";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network *net = load_network(cfgfile, weightfile, 0);
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
int imgs = net->batch*net->subdivisions;
int i = *net->seen/imgs;
data train, buffer;
layer l = net->layers[net->n - 1];
int side = l.side;
int classes = l.classes;
float jitter = l.jitter;
list *plist = get_paths(train_images);
//int N = plist->size;
char **paths = (char **)list_to_array(plist);
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.paths = paths;
args.n = imgs;
args.m = plist->size;
args.classes = classes;
args.jitter = jitter;
args.num_boxes = side;
args.d = &buffer;
args.type = REGION_DATA;
args.angle = net->angle;
args.exposure = net->exposure;
args.saturation = net->saturation;
args.hue = net->hue;
pthread_t load_thread = load_data_in_thread(args);
clock_t time;
//while(i*imgs < N*120){
while(get_current_batch(net) < net->max_batches){
i += 1;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
/*
image im = float_to_image(net->w, net->h, 3, train.X.vals[113]);
image copy = copy_image(im);
draw_coco(copy, train.y.vals[113], 7, "truth");
cvWaitKey(0);
free_image(copy);
*/
time=clock();
float loss = train_network(net, train);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %f rate, %lf seconds, %d images\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time), i*imgs);
if(i%1000==0 || (i < 1000 && i%100 == 0)){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s.backup", backup_directory, base);
save_weights(net, buff);
}
free_data(train);
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
/* Write detections for one image as COCO-format result JSON: bbox is [x_min, y_min, width, height] in pixels */
static void print_cocos(FILE *fp, int image_id, detection *dets, int num_boxes, int classes, int w, int h)
{
int i, j;
for(i = 0; i < num_boxes; ++i){
float xmin = dets[i].bbox.x - dets[i].bbox.w/2.;
float xmax = dets[i].bbox.x + dets[i].bbox.w/2.;
float ymin = dets[i].bbox.y - dets[i].bbox.h/2.;
float ymax = dets[i].bbox.y + dets[i].bbox.h/2.;
if (xmin < 0) xmin = 0;
if (ymin < 0) ymin = 0;
if (xmax > w) xmax = w;
if (ymax > h) ymax = h;
float bx = xmin;
float by = ymin;
float bw = xmax - xmin;
float bh = ymax - ymin;
for(j = 0; j < classes; ++j){
if (dets[i].prob[j]) fprintf(fp, "{\"image_id\":%d, \"category_id\":%d, \"bbox\":[%f, %f, %f, %f], \"score\":%f},\n", image_id, coco_ids[j], bx, by, bw, bh, dets[i].prob[j]);
}
}
}
/* COCO image ids are the digits after the last '_' in the filename,
   e.g. "COCO_val2014_000000000042.jpg" -> 42 */
int get_coco_image_id(char *filename)
{
char *p = strrchr(filename, '_');
return atoi(p+1);
}
void validate_coco(char *cfg, char *weights)
{
network *net = load_network(cfg, weights, 0);
set_batch_network(net, 1);
fprintf(stderr, "Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
srand(time(0));
char *base = "results/";
list *plist = get_paths("data/coco_val_5k.list");
//list *plist = get_paths("/home/pjreddie/data/people-art/test.txt");
//list *plist = get_paths("/home/pjreddie/data/voc/test/2007_test.txt");
char **paths = (char **)list_to_array(plist);
layer l = net->layers[net->n-1];
int classes = l.classes;
char buff[1024];
snprintf(buff, 1024, "%s/coco_results.json", base);
FILE *fp = fopen(buff, "w");
fprintf(fp, "[\n");
int m = plist->size;
int i=0;
int t;
float thresh = .01;
int nms = 1;
float iou_thresh = .5;
int nthreads = 8;
image *val = calloc(nthreads, sizeof(image));
image *val_resized = calloc(nthreads, sizeof(image));
image *buf = calloc(nthreads, sizeof(image));
image *buf_resized = calloc(nthreads, sizeof(image));
pthread_t *thr = calloc(nthreads, sizeof(pthread_t));
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.type = IMAGE_DATA;
for(t = 0; t < nthreads; ++t){
args.path = paths[i+t];
args.im = &buf[t];
args.resized = &buf_resized[t];
thr[t] = load_data_in_thread(args);
}
time_t start = time(0);
for(i = nthreads; i < m+nthreads; i += nthreads){
fprintf(stderr, "%d\n", i);
for(t = 0; t < nthreads && i+t-nthreads < m; ++t){
pthread_join(thr[t], 0);
val[t] = buf[t];
val_resized[t] = buf_resized[t];
}
for(t = 0; t < nthreads && i+t < m; ++t){
args.path = paths[i+t];
args.im = &buf[t];
args.resized = &buf_resized[t];
thr[t] = load_data_in_thread(args);
}
for(t = 0; t < nthreads && i+t-nthreads < m; ++t){
char *path = paths[i+t-nthreads];
int image_id = get_coco_image_id(path);
float *X = val_resized[t].data;
network_predict(net, X);
int w = val[t].w;
int h = val[t].h;
int nboxes = 0;
detection *dets = get_network_boxes(net, w, h, thresh, 0, 0, 0, &nboxes);
if (nms) do_nms_sort(dets, l.side*l.side*l.n, classes, iou_thresh);
print_cocos(fp, image_id, dets, l.side*l.side*l.n, classes, w, h);
free_detections(dets, nboxes);
free_image(val[t]);
free_image(val_resized[t]);
}
}
fseek(fp, -2, SEEK_CUR); /* back up over the trailing ",\n" so the JSON array stays valid */
fprintf(fp, "\n]\n");
fclose(fp);
fprintf(stderr, "Total Detection Time: %f Seconds\n", (double)(time(0) - start));
}
void validate_coco_recall(char *cfgfile, char *weightfile)
{
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
fprintf(stderr, "Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
srand(time(0));
char *base = "results/comp4_det_test_";
list *plist = get_paths("/home/pjreddie/data/voc/test/2007_test.txt");
char **paths = (char **)list_to_array(plist);
layer l = net->layers[net->n-1];
int classes = l.classes;
int side = l.side;
int j, k;
FILE **fps = calloc(classes, sizeof(FILE *));
for(j = 0; j < classes; ++j){
char buff[1024];
snprintf(buff, 1024, "%s%s.txt", base, coco_classes[j]);
fps[j] = fopen(buff, "w");
}
int m = plist->size;
int i=0;
float thresh = .001;
int nms = 0;
float iou_thresh = .5;
int total = 0;
int correct = 0;
int proposals = 0;
float avg_iou = 0;
for(i = 0; i < m; ++i){
char *path = paths[i];
image orig = load_image_color(path, 0, 0);
image sized = resize_image(orig, net->w, net->h);
char *id = basecfg(path);
network_predict(net, sized.data);
int nboxes = 0;
detection *dets = get_network_boxes(net, orig.w, orig.h, thresh, 0, 0, 1, &nboxes);
if (nms) do_nms_obj(dets, side*side*l.n, 1, nms);
char labelpath[4096];
find_replace(path, "images", "labels", labelpath);
find_replace(labelpath, "JPEGImages", "labels", labelpath);
find_replace(labelpath, ".jpg", ".txt", labelpath);
find_replace(labelpath, ".JPEG", ".txt", labelpath);
int num_labels = 0;
box_label *truth = read_boxes(labelpath, &num_labels);
for(k = 0; k < side*side*l.n; ++k){
if(dets[k].objectness > thresh){
++proposals;
}
}
for (j = 0; j < num_labels; ++j) {
++total;
box t = {truth[j].x, truth[j].y, truth[j].w, truth[j].h};
float best_iou = 0;
for(k = 0; k < side*side*l.n; ++k){
float iou = box_iou(dets[k].bbox, t);
if(dets[k].objectness > thresh && iou > best_iou){
best_iou = iou;
}
}
avg_iou += best_iou;
if(best_iou > iou_thresh){
++correct;
}
}
free_detections(dets, nboxes);
fprintf(stderr, "%5d %5d %5d\tRPs/Img: %.2f\tIOU: %.2f%%\tRecall:%.2f%%\n", i, correct, total, (float)proposals/(i+1), avg_iou*100/total, 100.*correct/total);
free(id);
free_image(orig);
free_image(sized);
}
}
void test_coco(char *cfgfile, char *weightfile, char *filename, float thresh)
{
image **alphabet = load_alphabet();
network *net = load_network(cfgfile, weightfile, 0);
layer l = net->layers[net->n-1];
set_batch_network(net, 1);
srand(2222222);
float nms = .4;
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
} else {
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input,0,0);
image sized = resize_image(im, net->w, net->h);
float *X = sized.data;
time=clock();
network_predict(net, X);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
int nboxes = 0;
detection *dets = get_network_boxes(net, 1, 1, thresh, 0, 0, 0, &nboxes);
if (nms) do_nms_sort(dets, l.side*l.side*l.n, l.classes, nms);
draw_detections(im, dets, l.side*l.side*l.n, thresh, coco_classes, alphabet, 80);
save_image(im, "prediction");
show_image(im, "predictions", 0);
free_detections(dets, nboxes);
free_image(im);
free_image(sized);
if (filename) break;
}
}
void run_coco(int argc, char **argv)
{
char *prefix = find_char_arg(argc, argv, "-prefix", 0);
float thresh = find_float_arg(argc, argv, "-thresh", .2);
int cam_index = find_int_arg(argc, argv, "-c", 0);
int frame_skip = find_int_arg(argc, argv, "-s", 0);
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5]: 0;
int avg = find_int_arg(argc, argv, "-avg", 1);
if(0==strcmp(argv[2], "test")) test_coco(cfg, weights, filename, thresh);
else if(0==strcmp(argv[2], "train")) train_coco(cfg, weights);
else if(0==strcmp(argv[2], "valid")) validate_coco(cfg, weights);
else if(0==strcmp(argv[2], "recall")) validate_coco_recall(cfg, weights);
else if(0==strcmp(argv[2], "demo")) demo(cfg, weights, thresh, cam_index, filename, coco_classes, 80, frame_skip, prefix, avg, .5, 0,0,0,0);
}
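/*
 * Example invocations of run_coco above (cfg/weights/image names are
 * illustrative, not taken from this commit):
 *   ./darknet coco train cfg/yolo.cfg
 *   ./darknet coco test  cfg/yolo.cfg yolo.weights dog.jpg -thresh .2
 *   ./darknet coco valid cfg/yolo.cfg yolo.weights
 */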

/* ---- next file added by this commit (darknet utility commands, 503 lines) ---- */
#include "darknet.h"
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
extern void predict_classifier(char *datacfg, char *cfgfile, char *weightfile, char *filename, int top);
extern void test_detector(char *datacfg, char *cfgfile, char *weightfile, char *filename, float thresh, float hier_thresh, char *outfile, int fullscreen);
extern void run_yolo(int argc, char **argv);
extern void run_detector(int argc, char **argv);
extern void run_coco(int argc, char **argv);
extern void run_nightmare(int argc, char **argv);
extern void run_classifier(int argc, char **argv);
extern void run_regressor(int argc, char **argv);
extern void run_segmenter(int argc, char **argv);
extern void run_isegmenter(int argc, char **argv);
extern void run_char_rnn(int argc, char **argv);
extern void run_tag(int argc, char **argv);
extern void run_cifar(int argc, char **argv);
extern void run_go(int argc, char **argv);
extern void run_art(int argc, char **argv);
extern void run_super(int argc, char **argv);
extern void run_lsd(int argc, char **argv);
void average(int argc, char *argv[])
{
char *cfgfile = argv[2];
char *outfile = argv[3];
gpu_index = -1;
network *net = parse_network_cfg(cfgfile);
network *sum = parse_network_cfg(cfgfile);
char *weightfile = argv[4];
load_weights(sum, weightfile);
int i, j;
int n = argc - 5;
for(i = 0; i < n; ++i){
weightfile = argv[i+5];
load_weights(net, weightfile);
for(j = 0; j < net->n; ++j){
layer l = net->layers[j];
layer out = sum->layers[j];
if(l.type == CONVOLUTIONAL){
int num = l.n*l.c*l.size*l.size;
axpy_cpu(l.n, 1, l.biases, 1, out.biases, 1);
axpy_cpu(num, 1, l.weights, 1, out.weights, 1);
if(l.batch_normalize){
axpy_cpu(l.n, 1, l.scales, 1, out.scales, 1);
axpy_cpu(l.n, 1, l.rolling_mean, 1, out.rolling_mean, 1);
axpy_cpu(l.n, 1, l.rolling_variance, 1, out.rolling_variance, 1);
}
}
if(l.type == CONNECTED){
axpy_cpu(l.outputs, 1, l.biases, 1, out.biases, 1);
axpy_cpu(l.outputs*l.inputs, 1, l.weights, 1, out.weights, 1);
}
}
}
n = n+1; /* count the initial weight file that was loaded into sum */
for(j = 0; j < net->n; ++j){
layer l = sum->layers[j];
if(l.type == CONVOLUTIONAL){
int num = l.n*l.c*l.size*l.size;
scal_cpu(l.n, 1./n, l.biases, 1);
scal_cpu(num, 1./n, l.weights, 1);
if(l.batch_normalize){
scal_cpu(l.n, 1./n, l.scales, 1);
scal_cpu(l.n, 1./n, l.rolling_mean, 1);
scal_cpu(l.n, 1./n, l.rolling_variance, 1);
}
}
if(l.type == CONNECTED){
scal_cpu(l.outputs, 1./n, l.biases, 1);
scal_cpu(l.outputs*l.inputs, 1./n, l.weights, 1);
}
}
save_weights(sum, outfile);
}
/* Estimate floating point operations for one forward pass (2 ops per multiply-accumulate) */
long numops(network *net)
{
int i;
long ops = 0;
for(i = 0; i < net->n; ++i){
layer l = net->layers[i];
if(l.type == CONVOLUTIONAL){
ops += 2l * l.n * l.size*l.size*l.c/l.groups * l.out_h*l.out_w;
} else if(l.type == CONNECTED){
ops += 2l * l.inputs * l.outputs;
} else if (l.type == RNN){
ops += 2l * l.input_layer->inputs * l.input_layer->outputs;
ops += 2l * l.self_layer->inputs * l.self_layer->outputs;
ops += 2l * l.output_layer->inputs * l.output_layer->outputs;
} else if (l.type == GRU){
ops += 2l * l.uz->inputs * l.uz->outputs;
ops += 2l * l.uh->inputs * l.uh->outputs;
ops += 2l * l.ur->inputs * l.ur->outputs;
ops += 2l * l.wz->inputs * l.wz->outputs;
ops += 2l * l.wh->inputs * l.wh->outputs;
ops += 2l * l.wr->inputs * l.wr->outputs;
} else if (l.type == LSTM){
ops += 2l * l.uf->inputs * l.uf->outputs;
ops += 2l * l.ui->inputs * l.ui->outputs;
ops += 2l * l.ug->inputs * l.ug->outputs;
ops += 2l * l.uo->inputs * l.uo->outputs;
ops += 2l * l.wf->inputs * l.wf->outputs;
ops += 2l * l.wi->inputs * l.wi->outputs;
ops += 2l * l.wg->inputs * l.wg->outputs;
ops += 2l * l.wo->inputs * l.wo->outputs;
}
}
return ops;
}
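/*
 * Worked example of the convolutional term above: a 3x3 convolution with
 * 256 input channels, 64 filters (groups=1) and a 52x52 output costs
 *   2 * 64 * 3*3 * 256 * 52*52 ~= 0.80 GFLOPs per forward pass.
 * (Layer sizes here are illustrative, not taken from any cfg in this commit.)
 */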
void speed(char *cfgfile, int tics)
{
if (tics == 0) tics = 1000;
network *net = parse_network_cfg(cfgfile);
set_batch_network(net, 1);
int i;
double time=what_time_is_it_now();
image im = make_image(net->w, net->h, net->c*net->batch);
for(i = 0; i < tics; ++i){
network_predict(net, im.data);
}
double t = what_time_is_it_now() - time;
long ops = numops(net);
printf("\n%d evals, %f Seconds\n", tics, t);
printf("Floating Point Operations: %.2f Bn\n", (float)ops/1000000000.);
printf("FLOPS: %.2f Bn\n", (float)ops/1000000000.*tics/t);
printf("Speed: %f sec/eval\n", t/tics);
printf("Speed: %f Hz\n", tics/t);
}
void operations(char *cfgfile)
{
gpu_index = -1;
network *net = parse_network_cfg(cfgfile);
long ops = numops(net);
printf("Floating Point Operations: %ld\n", ops);
printf("Floating Point Operations: %.2f Bn\n", (float)ops/1000000000.);
}
void oneoff(char *cfgfile, char *weightfile, char *outfile)
{
gpu_index = -1;
network *net = parse_network_cfg(cfgfile);
int oldn = net->layers[net->n - 2].n;
int c = net->layers[net->n - 2].c;
scal_cpu(oldn*c, .1, net->layers[net->n - 2].weights, 1);
scal_cpu(oldn, 0, net->layers[net->n - 2].biases, 1);
net->layers[net->n - 2].n = 11921; /* widen the layer so the extra weights can be read */
/* temporarily offset biases/weights so the loaded values land 5 outputs in */
net->layers[net->n - 2].biases += 5;
net->layers[net->n - 2].weights += 5*c;
if(weightfile){
load_weights(net, weightfile);
}
net->layers[net->n - 2].biases -= 5;
net->layers[net->n - 2].weights -= 5*c;
net->layers[net->n - 2].n = oldn;
printf("%d\n", oldn);
layer l = net->layers[net->n - 2];
copy_cpu(l.n/3, l.biases, 1, l.biases + l.n/3, 1);
copy_cpu(l.n/3, l.biases, 1, l.biases + 2*l.n/3, 1);
copy_cpu(l.n/3*l.c, l.weights, 1, l.weights + l.n/3*l.c, 1);
copy_cpu(l.n/3*l.c, l.weights, 1, l.weights + 2*l.n/3*l.c, 1);
*net->seen = 0;
save_weights(net, outfile);
}
void oneoff2(char *cfgfile, char *weightfile, char *outfile, int l)
{
gpu_index = -1;
network *net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights_upto(net, weightfile, 0, net->n);
load_weights_upto(net, weightfile, l, net->n);
}
*net->seen = 0;
save_weights_upto(net, outfile, net->n);
}
void partial(char *cfgfile, char *weightfile, char *outfile, int max)
{
gpu_index = -1;
network *net = load_network(cfgfile, weightfile, 1);
save_weights_upto(net, outfile, max);
}
void print_weights(char *cfgfile, char *weightfile, int n)
{
gpu_index = -1;
network *net = load_network(cfgfile, weightfile, 1);
layer l = net->layers[n];
int i, j;
//printf("[");
for(i = 0; i < l.n; ++i){
//printf("[");
for(j = 0; j < l.size*l.size*l.c; ++j){
//if(j > 0) printf(",");
printf("%g ", l.weights[i*l.size*l.size*l.c + j]);
}
printf("\n");
//printf("]%s\n", (i == l.n-1)?"":",");
}
//printf("]");
}
void rescale_net(char *cfgfile, char *weightfile, char *outfile)
{
gpu_index = -1;
network *net = load_network(cfgfile, weightfile, 0);
int i;
for(i = 0; i < net->n; ++i){
layer l = net->layers[i];
if(l.type == CONVOLUTIONAL){
rescale_weights(l, 2, -.5);
break;
}
}
save_weights(net, outfile);
}
void rgbgr_net(char *cfgfile, char *weightfile, char *outfile)
{
gpu_index = -1;
network *net = load_network(cfgfile, weightfile, 0);
int i;
for(i = 0; i < net->n; ++i){
layer l = net->layers[i];
if(l.type == CONVOLUTIONAL){
rgbgr_weights(l);
break;
}
}
save_weights(net, outfile);
}
void reset_normalize_net(char *cfgfile, char *weightfile, char *outfile)
{
gpu_index = -1;
network *net = load_network(cfgfile, weightfile, 0);
int i;
for (i = 0; i < net->n; ++i) {
layer l = net->layers[i];
if (l.type == CONVOLUTIONAL && l.batch_normalize) {
denormalize_convolutional_layer(l);
}
if (l.type == CONNECTED && l.batch_normalize) {
denormalize_connected_layer(l);
}
if (l.type == GRU && l.batch_normalize) {
denormalize_connected_layer(*l.input_z_layer);
denormalize_connected_layer(*l.input_r_layer);
denormalize_connected_layer(*l.input_h_layer);
denormalize_connected_layer(*l.state_z_layer);
denormalize_connected_layer(*l.state_r_layer);
denormalize_connected_layer(*l.state_h_layer);
}
}
save_weights(net, outfile);
}
layer normalize_layer(layer l, int n)
{
int j;
l.batch_normalize=1;
l.scales = calloc(n, sizeof(float));
for(j = 0; j < n; ++j){
l.scales[j] = 1;
}
l.rolling_mean = calloc(n, sizeof(float));
l.rolling_variance = calloc(n, sizeof(float));
return l;
}
void normalize_net(char *cfgfile, char *weightfile, char *outfile)
{
gpu_index = -1;
network *net = load_network(cfgfile, weightfile, 0);
int i;
for(i = 0; i < net->n; ++i){
layer l = net->layers[i];
if(l.type == CONVOLUTIONAL && !l.batch_normalize){
net->layers[i] = normalize_layer(l, l.n);
}
if (l.type == CONNECTED && !l.batch_normalize) {
net->layers[i] = normalize_layer(l, l.outputs);
}
if (l.type == GRU && l.batch_normalize) {
*l.input_z_layer = normalize_layer(*l.input_z_layer, l.input_z_layer->outputs);
*l.input_r_layer = normalize_layer(*l.input_r_layer, l.input_r_layer->outputs);
*l.input_h_layer = normalize_layer(*l.input_h_layer, l.input_h_layer->outputs);
*l.state_z_layer = normalize_layer(*l.state_z_layer, l.state_z_layer->outputs);
*l.state_r_layer = normalize_layer(*l.state_r_layer, l.state_r_layer->outputs);
*l.state_h_layer = normalize_layer(*l.state_h_layer, l.state_h_layer->outputs);
net->layers[i].batch_normalize=1;
}
}
save_weights(net, outfile);
}
void statistics_net(char *cfgfile, char *weightfile)
{
gpu_index = -1;
network *net = load_network(cfgfile, weightfile, 0);
int i;
for (i = 0; i < net->n; ++i) {
layer l = net->layers[i];
if (l.type == CONNECTED && l.batch_normalize) {
printf("Connected Layer %d\n", i);
statistics_connected_layer(l);
}
if (l.type == GRU && l.batch_normalize) {
printf("GRU Layer %d\n", i);
printf("Input Z\n");
statistics_connected_layer(*l.input_z_layer);
printf("Input R\n");
statistics_connected_layer(*l.input_r_layer);
printf("Input H\n");
statistics_connected_layer(*l.input_h_layer);
printf("State Z\n");
statistics_connected_layer(*l.state_z_layer);
printf("State R\n");
statistics_connected_layer(*l.state_r_layer);
printf("State H\n");
statistics_connected_layer(*l.state_h_layer);
}
printf("\n");
}
}
void denormalize_net(char *cfgfile, char *weightfile, char *outfile)
{
gpu_index = -1;
network *net = load_network(cfgfile, weightfile, 0);
int i;
for (i = 0; i < net->n; ++i) {
layer l = net->layers[i];
if ((l.type == DECONVOLUTIONAL || l.type == CONVOLUTIONAL) && l.batch_normalize) {
denormalize_convolutional_layer(l);
net->layers[i].batch_normalize=0;
}
if (l.type == CONNECTED && l.batch_normalize) {
denormalize_connected_layer(l);
net->layers[i].batch_normalize=0;
}
if (l.type == GRU && l.batch_normalize) {
denormalize_connected_layer(*l.input_z_layer);
denormalize_connected_layer(*l.input_r_layer);
denormalize_connected_layer(*l.input_h_layer);
denormalize_connected_layer(*l.state_z_layer);
denormalize_connected_layer(*l.state_r_layer);
denormalize_connected_layer(*l.state_h_layer);
l.input_z_layer->batch_normalize = 0;
l.input_r_layer->batch_normalize = 0;
l.input_h_layer->batch_normalize = 0;
l.state_z_layer->batch_normalize = 0;
l.state_r_layer->batch_normalize = 0;
l.state_h_layer->batch_normalize = 0;
net->layers[i].batch_normalize=0;
}
}
save_weights(net, outfile);
}
void mkimg(char *cfgfile, char *weightfile, int h, int w, int num, char *prefix)
{
network *net = load_network(cfgfile, weightfile, 0);
image *ims = get_weights(net->layers[0]);
int n = net->layers[0].n;
int z;
for(z = 0; z < num; ++z){
image im = make_image(h, w, 3);
fill_image(im, .5);
int i;
for(i = 0; i < 100; ++i){
image r = copy_image(ims[rand()%n]);
rotate_image_cw(r, rand()%4);
random_distort_image(r, 1, 1.5, 1.5);
int dx = rand()%(w-r.w);
int dy = rand()%(h-r.h);
ghost_image(r, im, dx, dy);
free_image(r);
}
char buff[256];
sprintf(buff, "%s/gen_%d", prefix, z);
save_image(im, buff);
free_image(im);
}
}
void visualize(char *cfgfile, char *weightfile)
{
network *net = load_network(cfgfile, weightfile, 0);
visualize_network(net);
}
int main(int argc, char **argv)
{
//test_resize("data/bad.jpg");
//test_box();
//test_convolutional_layer();
if(argc < 2){
fprintf(stderr, "usage: %s <function>\n", argv[0]);
return 0;
}
gpu_index = find_int_arg(argc, argv, "-i", 0);
if(find_arg(argc, argv, "-nogpu")) {
gpu_index = -1;
}
#ifndef GPU
gpu_index = -1;
#else
if(gpu_index >= 0){
cuda_set_device(gpu_index);
}
#endif
if (0 == strcmp(argv[1], "average")){
average(argc, argv);
} else if (0 == strcmp(argv[1], "yolo")){
run_yolo(argc, argv);
} else if (0 == strcmp(argv[1], "super")){
run_super(argc, argv);
} else if (0 == strcmp(argv[1], "lsd")){
run_lsd(argc, argv);
} else if (0 == strcmp(argv[1], "detector")){
run_detector(argc, argv);
} else if (0 == strcmp(argv[1], "detect")){
float thresh = find_float_arg(argc, argv, "-thresh", .5);
char *filename = (argc > 4) ? argv[4]: 0;
char *outfile = find_char_arg(argc, argv, "-out", 0);
int fullscreen = find_arg(argc, argv, "-fullscreen");
test_detector("cfg/coco.data", argv[2], argv[3], filename, thresh, .5, outfile, fullscreen);
} else if (0 == strcmp(argv[1], "cifar")){
run_cifar(argc, argv);
} else if (0 == strcmp(argv[1], "go")){
run_go(argc, argv);
} else if (0 == strcmp(argv[1], "rnn")){
run_char_rnn(argc, argv);
} else if (0 == strcmp(argv[1], "coco")){
run_coco(argc, argv);
} else if (0 == strcmp(argv[1], "classify")){
predict_classifier("cfg/imagenet1k.data", argv[2], argv[3], argv[4], 5);
} else if (0 == strcmp(argv[1], "classifier")){
run_classifier(argc, argv);
} else if (0 == strcmp(argv[1], "regressor")){
run_regressor(argc, argv);
} else if (0 == strcmp(argv[1], "isegmenter")){
run_isegmenter(argc, argv);
} else if (0 == strcmp(argv[1], "segmenter")){
run_segmenter(argc, argv);
} else if (0 == strcmp(argv[1], "art")){
run_art(argc, argv);
} else if (0 == strcmp(argv[1], "tag")){
run_tag(argc, argv);
} else if (0 == strcmp(argv[1], "3d")){
composite_3d(argv[2], argv[3], argv[4], (argc > 5) ? atof(argv[5]) : 0);
} else if (0 == strcmp(argv[1], "test")){
test_resize(argv[2]);
} else if (0 == strcmp(argv[1], "nightmare")){
run_nightmare(argc, argv);
} else if (0 == strcmp(argv[1], "rgbgr")){
rgbgr_net(argv[2], argv[3], argv[4]);
} else if (0 == strcmp(argv[1], "reset")){
reset_normalize_net(argv[2], argv[3], argv[4]);
} else if (0 == strcmp(argv[1], "denormalize")){
denormalize_net(argv[2], argv[3], argv[4]);
} else if (0 == strcmp(argv[1], "statistics")){
statistics_net(argv[2], argv[3]);
} else if (0 == strcmp(argv[1], "normalize")){
normalize_net(argv[2], argv[3], argv[4]);
} else if (0 == strcmp(argv[1], "rescale")){
rescale_net(argv[2], argv[3], argv[4]);
} else if (0 == strcmp(argv[1], "ops")){
operations(argv[2]);
} else if (0 == strcmp(argv[1], "speed")){
speed(argv[2], (argc > 3 && argv[3]) ? atoi(argv[3]) : 0);
} else if (0 == strcmp(argv[1], "oneoff")){
oneoff(argv[2], argv[3], argv[4]);
} else if (0 == strcmp(argv[1], "oneoff2")){
oneoff2(argv[2], argv[3], argv[4], atoi(argv[5]));
} else if (0 == strcmp(argv[1], "print")){
print_weights(argv[2], argv[3], atoi(argv[4]));
} else if (0 == strcmp(argv[1], "partial")){
partial(argv[2], argv[3], argv[4], atoi(argv[5]));
} else if (0 == strcmp(argv[1], "visualize")){
visualize(argv[2], (argc > 3) ? argv[3] : 0);
} else if (0 == strcmp(argv[1], "mkimg")){
mkimg(argv[2], argv[3], atoi(argv[4]), atoi(argv[5]), atoi(argv[6]), argv[7]);
} else if (0 == strcmp(argv[1], "imtest")){
test_resize(argv[2]);
} else {
fprintf(stderr, "Not an option: %s\n", argv[1]);
}
return 0;
}


@ -0,0 +1,56 @@
# Hacky path setup so this script can find darknet.py.
# Prefer adding the python/ directory to your PYTHONPATH instead.
# Work in progress; use at your own risk.
from scipy.misc import imread  # removed in SciPy >= 1.2; imageio.imread is a drop-in replacement
import cv2
def array_to_image(arr):
    # HWC (numpy/OpenCV layout) -> CHW, as darknet's IMAGE expects
    arr = arr.transpose(2,0,1)
    c = arr.shape[0]
    h = arr.shape[1]
    w = arr.shape[2]
    # scale to [0,1] and hand a flat float buffer to the C side
    arr = (arr/255.0).flatten()
    data = dn.c_array(dn.c_float, arr)
    im = dn.IMAGE(w,h,c,data)
    return im
def detect2(net, meta, image, thresh=.5, hier_thresh=.5, nms=.45):
    boxes = dn.make_boxes(net)
    probs = dn.make_probs(net)
    num = dn.num_boxes(net)
    dn.network_detect(net, image, thresh, hier_thresh, nms, boxes, probs)
    res = []
    for j in range(num):
        for i in range(meta.classes):
            if probs[j][i] > 0:
                res.append((meta.names[i], probs[j][i], (boxes[j].x, boxes[j].y, boxes[j].w, boxes[j].h)))
    res = sorted(res, key=lambda x: -x[1])
    dn.free_ptrs(dn.cast(probs, dn.POINTER(dn.c_void_p)), num)
    return res
import sys, os
sys.path.append(os.path.join(os.getcwd(),'python/'))
import darknet as dn
# Darknet
net = dn.load_net("cfg/tiny-yolo.cfg", "tiny-yolo.weights", 0)
meta = dn.load_meta("cfg/coco.data")
r = dn.detect(net, meta, "data/dog.jpg")
print(r)
# scipy
arr = imread('data/dog.jpg')
im = array_to_image(arr)
r = detect2(net, meta, im)
print(r)
# OpenCV
arr = cv2.imread('data/dog.jpg')
im = array_to_image(arr)
dn.rgbgr_image(im)
r = detect2(net, meta, im)
print(r)


@ -0,0 +1,850 @@
#include "darknet.h"
static int coco_ids[] = {1,2,3,4,5,6,7,8,9,10,11,13,14,15,16,17,18,19,20,21,22,23,24,25,27,28,31,32,33,34,35,36,37,38,39,40,41,42,43,44,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,67,70,72,73,74,75,76,77,78,79,80,81,82,84,85,86,87,88,89,90};
void train_detector(char *datacfg, char *cfgfile, char *weightfile, int *gpus, int ngpus, int clear)
{
list *options = read_data_cfg(datacfg);
char *train_images = option_find_str(options, "train", "data/train.list");
char *backup_directory = option_find_str(options, "backup", "/backup/");
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network **nets = calloc(ngpus, sizeof(network *));
srand(time(0));
int seed = rand();
int i;
for(i = 0; i < ngpus; ++i){
srand(seed);
#ifdef GPU
cuda_set_device(gpus[i]);
#endif
nets[i] = load_network(cfgfile, weightfile, clear);
nets[i]->learning_rate *= ngpus;
}
srand(time(0));
network *net = nets[0];
int imgs = net->batch * net->subdivisions * ngpus;
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
data train, buffer;
layer l = net->layers[net->n - 1];
int classes = l.classes;
float jitter = l.jitter;
list *plist = get_paths(train_images);
//int N = plist->size;
char **paths = (char **)list_to_array(plist);
load_args args = get_base_args(net);
args.coords = l.coords;
args.paths = paths;
args.n = imgs;
args.m = plist->size;
args.classes = classes;
args.jitter = jitter;
args.num_boxes = l.max_boxes;
args.d = &buffer;
args.type = DETECTION_DATA;
//args.type = INSTANCE_DATA;
args.threads = 64;
pthread_t load_thread = load_data(args);
double time;
int count = 0;
//while(i*imgs < N*120){
while(get_current_batch(net) < net->max_batches){
if(l.random && count++%10 == 0){
printf("Resizing\n");
int dim = (rand() % 10 + 10) * 32;
if (get_current_batch(net)+200 > net->max_batches) dim = 608;
//int dim = (rand() % 4 + 16) * 32;
printf("%d\n", dim);
args.w = dim;
args.h = dim;
pthread_join(load_thread, 0);
train = buffer;
free_data(train);
load_thread = load_data(args);
#pragma omp parallel for
for(i = 0; i < ngpus; ++i){
resize_network(nets[i], dim, dim);
}
net = nets[0];
}
time=what_time_is_it_now();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data(args);
/*
int k;
for(k = 0; k < l.max_boxes; ++k){
box b = float_to_box(train.y.vals[10] + 1 + k*5);
if(!b.x) break;
printf("loaded: %f %f %f %f\n", b.x, b.y, b.w, b.h);
}
*/
/*
int zz;
for(zz = 0; zz < train.X.cols; ++zz){
image im = float_to_image(net->w, net->h, 3, train.X.vals[zz]);
int k;
for(k = 0; k < l.max_boxes; ++k){
box b = float_to_box(train.y.vals[zz] + k*5, 1);
printf("%f %f %f %f\n", b.x, b.y, b.w, b.h);
draw_bbox(im, b, 1, 1,0,0);
}
show_image(im, "truth11");
cvWaitKey(0);
save_image(im, "truth11");
}
*/
printf("Loaded: %lf seconds\n", what_time_is_it_now()-time);
time=what_time_is_it_now();
float loss = 0;
#ifdef GPU
if(ngpus == 1){
loss = train_network(net, train);
} else {
loss = train_networks(nets, ngpus, train, 4);
}
#else
loss = train_network(net, train);
#endif
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
i = get_current_batch(net);
printf("%ld: %f, %f avg, %f rate, %lf seconds, %d images\n", get_current_batch(net), loss, avg_loss, get_current_rate(net), what_time_is_it_now()-time, i*imgs);
if(i%100==0){
#ifdef GPU
if(ngpus != 1) sync_nets(nets, ngpus, 0);
#endif
char buff[256];
sprintf(buff, "%s/%s.backup", backup_directory, base);
save_weights(net, buff);
}
if(i%10000==0 || (i < 1000 && i%100 == 0)){
#ifdef GPU
if(ngpus != 1) sync_nets(nets, ngpus, 0);
#endif
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
free_data(train);
}
#ifdef GPU
if(ngpus != 1) sync_nets(nets, ngpus, 0);
#endif
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
static int get_coco_image_id(char *filename)
{
char *p = strrchr(filename, '/');
char *c = strrchr(filename, '_');
if(c) p = c;
return atoi(p+1);
}
static void print_cocos(FILE *fp, char *image_path, detection *dets, int num_boxes, int classes, int w, int h)
{
int i, j;
int image_id = get_coco_image_id(image_path);
for(i = 0; i < num_boxes; ++i){
float xmin = dets[i].bbox.x - dets[i].bbox.w/2.;
float xmax = dets[i].bbox.x + dets[i].bbox.w/2.;
float ymin = dets[i].bbox.y - dets[i].bbox.h/2.;
float ymax = dets[i].bbox.y + dets[i].bbox.h/2.;
if (xmin < 0) xmin = 0;
if (ymin < 0) ymin = 0;
if (xmax > w) xmax = w;
if (ymax > h) ymax = h;
float bx = xmin;
float by = ymin;
float bw = xmax - xmin;
float bh = ymax - ymin;
for(j = 0; j < classes; ++j){
if (dets[i].prob[j]) fprintf(fp, "{\"image_id\":%d, \"category_id\":%d, \"bbox\":[%f, %f, %f, %f], \"score\":%f},\n", image_id, coco_ids[j], bx, by, bw, bh, dets[i].prob[j]);
}
}
}
void print_detector_detections(FILE **fps, char *id, detection *dets, int total, int classes, int w, int h)
{
int i, j;
for(i = 0; i < total; ++i){
float xmin = dets[i].bbox.x - dets[i].bbox.w/2. + 1;
float xmax = dets[i].bbox.x + dets[i].bbox.w/2. + 1;
float ymin = dets[i].bbox.y - dets[i].bbox.h/2. + 1;
float ymax = dets[i].bbox.y + dets[i].bbox.h/2. + 1;
if (xmin < 1) xmin = 1;
if (ymin < 1) ymin = 1;
if (xmax > w) xmax = w;
if (ymax > h) ymax = h;
for(j = 0; j < classes; ++j){
if (dets[i].prob[j]) fprintf(fps[j], "%s %f %f %f %f %f\n", id, dets[i].prob[j],
xmin, ymin, xmax, ymax);
}
}
}
void print_imagenet_detections(FILE *fp, int id, detection *dets, int total, int classes, int w, int h)
{
int i, j;
for(i = 0; i < total; ++i){
float xmin = dets[i].bbox.x - dets[i].bbox.w/2.;
float xmax = dets[i].bbox.x + dets[i].bbox.w/2.;
float ymin = dets[i].bbox.y - dets[i].bbox.h/2.;
float ymax = dets[i].bbox.y + dets[i].bbox.h/2.;
if (xmin < 0) xmin = 0;
if (ymin < 0) ymin = 0;
if (xmax > w) xmax = w;
if (ymax > h) ymax = h;
for(j = 0; j < classes; ++j){
int class_id = j;
if (dets[i].prob[class_id]) fprintf(fp, "%d %d %f %f %f %f %f\n", id, j+1, dets[i].prob[class_id],
xmin, ymin, xmax, ymax);
}
}
}
void validate_detector_flip(char *datacfg, char *cfgfile, char *weightfile, char *outfile)
{
int j;
list *options = read_data_cfg(datacfg);
char *valid_images = option_find_str(options, "valid", "data/train.list");
char *name_list = option_find_str(options, "names", "data/names.list");
char *prefix = option_find_str(options, "results", "results");
char **names = get_labels(name_list);
char *mapf = option_find_str(options, "map", 0);
int *map = 0;
if (mapf) map = read_map(mapf);
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 2);
fprintf(stderr, "Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
srand(time(0));
list *plist = get_paths(valid_images);
char **paths = (char **)list_to_array(plist);
layer l = net->layers[net->n-1];
int classes = l.classes;
char buff[1024];
char *type = option_find_str(options, "eval", "voc");
FILE *fp = 0;
FILE **fps = 0;
int coco = 0;
int imagenet = 0;
if(0==strcmp(type, "coco")){
if(!outfile) outfile = "coco_results";
snprintf(buff, 1024, "%s/%s.json", prefix, outfile);
fp = fopen(buff, "w");
fprintf(fp, "[\n");
coco = 1;
} else if(0==strcmp(type, "imagenet")){
if(!outfile) outfile = "imagenet-detection";
snprintf(buff, 1024, "%s/%s.txt", prefix, outfile);
fp = fopen(buff, "w");
imagenet = 1;
classes = 200;
} else {
if(!outfile) outfile = "comp4_det_test_";
fps = calloc(classes, sizeof(FILE *));
for(j = 0; j < classes; ++j){
snprintf(buff, 1024, "%s/%s%s.txt", prefix, outfile, names[j]);
fps[j] = fopen(buff, "w");
}
}
int m = plist->size;
int i=0;
int t;
float thresh = .005;
float nms = .45;
int nthreads = 4;
image *val = calloc(nthreads, sizeof(image));
image *val_resized = calloc(nthreads, sizeof(image));
image *buf = calloc(nthreads, sizeof(image));
image *buf_resized = calloc(nthreads, sizeof(image));
pthread_t *thr = calloc(nthreads, sizeof(pthread_t));
image input = make_image(net->w, net->h, net->c*2);
load_args args = {0};
args.w = net->w;
args.h = net->h;
//args.type = IMAGE_DATA;
args.type = LETTERBOX_DATA;
for(t = 0; t < nthreads; ++t){
args.path = paths[i+t];
args.im = &buf[t];
args.resized = &buf_resized[t];
thr[t] = load_data_in_thread(args);
}
double start = what_time_is_it_now();
for(i = nthreads; i < m+nthreads; i += nthreads){
fprintf(stderr, "%d\n", i);
for(t = 0; t < nthreads && i+t-nthreads < m; ++t){
pthread_join(thr[t], 0);
val[t] = buf[t];
val_resized[t] = buf_resized[t];
}
for(t = 0; t < nthreads && i+t < m; ++t){
args.path = paths[i+t];
args.im = &buf[t];
args.resized = &buf_resized[t];
thr[t] = load_data_in_thread(args);
}
for(t = 0; t < nthreads && i+t-nthreads < m; ++t){
char *path = paths[i+t-nthreads];
char *id = basecfg(path);
copy_cpu(net->w*net->h*net->c, val_resized[t].data, 1, input.data, 1);
flip_image(val_resized[t]);
copy_cpu(net->w*net->h*net->c, val_resized[t].data, 1, input.data + net->w*net->h*net->c, 1);
network_predict(net, input.data);
int w = val[t].w;
int h = val[t].h;
int num = 0;
detection *dets = get_network_boxes(net, w, h, thresh, .5, map, 0, &num);
if (nms) do_nms_sort(dets, num, classes, nms);
if (coco){
print_cocos(fp, path, dets, num, classes, w, h);
} else if (imagenet){
print_imagenet_detections(fp, i+t-nthreads+1, dets, num, classes, w, h);
} else {
print_detector_detections(fps, id, dets, num, classes, w, h);
}
free_detections(dets, num);
free(id);
free_image(val[t]);
free_image(val_resized[t]);
}
}
for(j = 0; j < classes; ++j){
if(fps) fclose(fps[j]);
}
if(coco){
fseek(fp, -2, SEEK_CUR);
fprintf(fp, "\n]\n");
fclose(fp);
}
fprintf(stderr, "Total Detection Time: %f Seconds\n", what_time_is_it_now() - start);
}
void validate_detector(char *datacfg, char *cfgfile, char *weightfile, char *outfile)
{
int j;
list *options = read_data_cfg(datacfg);
char *valid_images = option_find_str(options, "valid", "data/train.list");
char *name_list = option_find_str(options, "names", "data/names.list");
char *prefix = option_find_str(options, "results", "results");
char **names = get_labels(name_list);
char *mapf = option_find_str(options, "map", 0);
int *map = 0;
if (mapf) map = read_map(mapf);
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
fprintf(stderr, "Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
srand(time(0));
list *plist = get_paths(valid_images);
char **paths = (char **)list_to_array(plist);
layer l = net->layers[net->n-1];
int classes = l.classes;
char buff[1024];
char *type = option_find_str(options, "eval", "voc");
FILE *fp = 0;
FILE **fps = 0;
int coco = 0;
int imagenet = 0;
if(0==strcmp(type, "coco")){
if(!outfile) outfile = "coco_results";
snprintf(buff, 1024, "%s/%s.json", prefix, outfile);
fp = fopen(buff, "w");
fprintf(fp, "[\n");
coco = 1;
} else if(0==strcmp(type, "imagenet")){
if(!outfile) outfile = "imagenet-detection";
snprintf(buff, 1024, "%s/%s.txt", prefix, outfile);
fp = fopen(buff, "w");
imagenet = 1;
classes = 200;
} else {
if(!outfile) outfile = "comp4_det_test_";
fps = calloc(classes, sizeof(FILE *));
for(j = 0; j < classes; ++j){
snprintf(buff, 1024, "%s/%s%s.txt", prefix, outfile, names[j]);
fps[j] = fopen(buff, "w");
}
}
int m = plist->size;
int i=0;
int t;
float thresh = .005;
float nms = .45;
int nthreads = 4;
image *val = calloc(nthreads, sizeof(image));
image *val_resized = calloc(nthreads, sizeof(image));
image *buf = calloc(nthreads, sizeof(image));
image *buf_resized = calloc(nthreads, sizeof(image));
pthread_t *thr = calloc(nthreads, sizeof(pthread_t));
load_args args = {0};
args.w = net->w;
args.h = net->h;
//args.type = IMAGE_DATA;
args.type = LETTERBOX_DATA;
for(t = 0; t < nthreads; ++t){
args.path = paths[i+t];
args.im = &buf[t];
args.resized = &buf_resized[t];
thr[t] = load_data_in_thread(args);
}
double start = what_time_is_it_now();
for(i = nthreads; i < m+nthreads; i += nthreads){
fprintf(stderr, "%d\n", i);
for(t = 0; t < nthreads && i+t-nthreads < m; ++t){
pthread_join(thr[t], 0);
val[t] = buf[t];
val_resized[t] = buf_resized[t];
}
for(t = 0; t < nthreads && i+t < m; ++t){
args.path = paths[i+t];
args.im = &buf[t];
args.resized = &buf_resized[t];
thr[t] = load_data_in_thread(args);
}
for(t = 0; t < nthreads && i+t-nthreads < m; ++t){
char *path = paths[i+t-nthreads];
char *id = basecfg(path);
float *X = val_resized[t].data;
network_predict(net, X);
int w = val[t].w;
int h = val[t].h;
int nboxes = 0;
detection *dets = get_network_boxes(net, w, h, thresh, .5, map, 0, &nboxes);
if (nms) do_nms_sort(dets, nboxes, classes, nms);
if (coco){
print_cocos(fp, path, dets, nboxes, classes, w, h);
} else if (imagenet){
print_imagenet_detections(fp, i+t-nthreads+1, dets, nboxes, classes, w, h);
} else {
print_detector_detections(fps, id, dets, nboxes, classes, w, h);
}
free_detections(dets, nboxes);
free(id);
free_image(val[t]);
free_image(val_resized[t]);
}
}
for(j = 0; j < classes; ++j){
if(fps) fclose(fps[j]);
}
if(coco){
fseek(fp, -2, SEEK_CUR);
fprintf(fp, "\n]\n");
fclose(fp);
}
fprintf(stderr, "Total Detection Time: %f Seconds\n", what_time_is_it_now() - start);
}
void validate_detector_recall(char *cfgfile, char *weightfile)
{
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
fprintf(stderr, "Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
srand(time(0));
list *plist = get_paths("data/coco_val_5k.list");
char **paths = (char **)list_to_array(plist);
layer l = net->layers[net->n-1];
int j, k;
int m = plist->size;
int i=0;
float thresh = .001;
float iou_thresh = .5;
float nms = .4;
int total = 0;
int correct = 0;
int proposals = 0;
float avg_iou = 0;
for(i = 0; i < m; ++i){
char *path = paths[i];
image orig = load_image_color(path, 0, 0);
image sized = resize_image(orig, net->w, net->h);
char *id = basecfg(path);
network_predict(net, sized.data);
int nboxes = 0;
detection *dets = get_network_boxes(net, sized.w, sized.h, thresh, .5, 0, 1, &nboxes);
if (nms) do_nms_obj(dets, nboxes, 1, nms);
char labelpath[4096];
find_replace(path, "images", "labels", labelpath);
find_replace(labelpath, "JPEGImages", "labels", labelpath);
find_replace(labelpath, ".jpg", ".txt", labelpath);
find_replace(labelpath, ".JPEG", ".txt", labelpath);
int num_labels = 0;
box_label *truth = read_boxes(labelpath, &num_labels);
for(k = 0; k < nboxes; ++k){
if(dets[k].objectness > thresh){
++proposals;
}
}
for (j = 0; j < num_labels; ++j) {
++total;
box t = {truth[j].x, truth[j].y, truth[j].w, truth[j].h};
float best_iou = 0;
for(k = 0; k < nboxes; ++k){
float iou = box_iou(dets[k].bbox, t);
if(dets[k].objectness > thresh && iou > best_iou){
best_iou = iou;
}
}
avg_iou += best_iou;
if(best_iou > iou_thresh){
++correct;
}
}
fprintf(stderr, "%5d %5d %5d\tRPs/Img: %.2f\tIOU: %.2f%%\tRecall:%.2f%%\n", i, correct, total, (float)proposals/(i+1), avg_iou*100/total, 100.*correct/total);
free(id);
free_image(orig);
free_image(sized);
}
}
void test_detector(char *datacfg, char *cfgfile, char *weightfile, char *filename, float thresh, float hier_thresh, char *outfile, int fullscreen)
{
list *options = read_data_cfg(datacfg);
char *name_list = option_find_str(options, "names", "data/names.list");
char **names = get_labels(name_list);
image **alphabet = load_alphabet();
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
double time;
char buff[256];
char *input = buff;
float nms=.45;
while(1){
if(filename){
strncpy(input, filename, 255);
input[255] = '\0';
} else {
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input,0,0);
image sized = letterbox_image(im, net->w, net->h);
//image sized = resize_image(im, net->w, net->h);
//image sized2 = resize_max(im, net->w);
//image sized = crop_image(sized2, -((net->w - sized2.w)/2), -((net->h - sized2.h)/2), net->w, net->h);
//resize_network(net, sized.w, sized.h);
layer l = net->layers[net->n-1];
float *X = sized.data;
time=what_time_is_it_now();
network_predict(net, X);
printf("%s: Predicted in %f seconds.\n", input, what_time_is_it_now()-time);
int nboxes = 0;
detection *dets = get_network_boxes(net, im.w, im.h, thresh, hier_thresh, 0, 1, &nboxes);
//printf("%d\n", nboxes);
//if (nms) do_nms_obj(boxes, probs, l.w*l.h*l.n, l.classes, nms);
if (nms) do_nms_sort(dets, nboxes, l.classes, nms);
draw_detections(im, dets, nboxes, thresh, names, alphabet, l.classes);
free_detections(dets, nboxes);
if(outfile){
save_image(im, outfile);
}
else{
save_image(im, "predictions");
#ifdef OPENCV
make_window("predictions", 512, 512, 0);
show_image(im, "predictions", 0);
#endif
}
free_image(im);
free_image(sized);
if (filename) break;
}
}
/*
void censor_detector(char *datacfg, char *cfgfile, char *weightfile, int cam_index, const char *filename, int class_id, float thresh, int skip)
{
#ifdef OPENCV
char *base = basecfg(cfgfile);
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
CvCapture * cap;
int w = 1280;
int h = 720;
if(filename){
cap = cvCaptureFromFile(filename);
}else{
cap = cvCaptureFromCAM(cam_index);
}
if(w){
cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH, w);
}
if(h){
cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT, h);
}
if(!cap) error("Couldn't connect to webcam.\n");
cvNamedWindow(base, CV_WINDOW_NORMAL);
cvResizeWindow(base, 512, 512);
float fps = 0;
int i;
float nms = .45;
while(1){
image in = get_image_from_stream(cap);
//image in_s = resize_image(in, net->w, net->h);
image in_s = letterbox_image(in, net->w, net->h);
layer l = net->layers[net->n-1];
float *X = in_s.data;
network_predict(net, X);
int nboxes = 0;
detection *dets = get_network_boxes(net, in.w, in.h, thresh, 0, 0, 0, &nboxes);
//if (nms) do_nms_obj(boxes, probs, l.w*l.h*l.n, l.classes, nms);
if (nms) do_nms_sort(dets, nboxes, l.classes, nms);
for(i = 0; i < nboxes; ++i){
if(dets[i].prob[class_id] > thresh){
box b = dets[i].bbox;
int left = b.x-b.w/2.;
int top = b.y-b.h/2.;
censor_image(in, left, top, b.w, b.h);
}
}
show_image(in, base);
cvWaitKey(10);
free_detections(dets, nboxes);
free_image(in_s);
free_image(in);
float curr = 0;
fps = .9*fps + .1*curr;
for(i = 0; i < skip; ++i){
image in = get_image_from_stream(cap);
free_image(in);
}
}
#endif
}
void extract_detector(char *datacfg, char *cfgfile, char *weightfile, int cam_index, const char *filename, int class_id, float thresh, int skip)
{
#ifdef OPENCV
char *base = basecfg(cfgfile);
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
CvCapture * cap;
int w = 1280;
int h = 720;
if(filename){
cap = cvCaptureFromFile(filename);
}else{
cap = cvCaptureFromCAM(cam_index);
}
if(w){
cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH, w);
}
if(h){
cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT, h);
}
if(!cap) error("Couldn't connect to webcam.\n");
cvNamedWindow(base, CV_WINDOW_NORMAL);
cvResizeWindow(base, 512, 512);
float fps = 0;
int i;
int count = 0;
float nms = .45;
while(1){
image in = get_image_from_stream(cap);
//image in_s = resize_image(in, net->w, net->h);
image in_s = letterbox_image(in, net->w, net->h);
layer l = net->layers[net->n-1];
show_image(in, base);
int nboxes = 0;
float *X = in_s.data;
network_predict(net, X);
detection *dets = get_network_boxes(net, in.w, in.h, thresh, 0, 0, 1, &nboxes);
//if (nms) do_nms_obj(boxes, probs, l.w*l.h*l.n, l.classes, nms);
if (nms) do_nms_sort(dets, nboxes, l.classes, nms);
for(i = 0; i < nboxes; ++i){
if(dets[i].prob[class_id] > thresh){
box b = dets[i].bbox;
int size = b.w*in.w > b.h*in.h ? b.w*in.w : b.h*in.h;
int dx = b.x*in.w-size/2.;
int dy = b.y*in.h-size/2.;
image bim = crop_image(in, dx, dy, size, size);
char buff[2048];
sprintf(buff, "results/extract/%07d", count);
++count;
save_image(bim, buff);
free_image(bim);
}
}
free_detections(dets, nboxes);
free_image(in_s);
free_image(in);
float curr = 0;
fps = .9*fps + .1*curr;
for(i = 0; i < skip; ++i){
image in = get_image_from_stream(cap);
free_image(in);
}
}
#endif
}
*/
/*
void network_detect(network *net, image im, float thresh, float hier_thresh, float nms, detection *dets)
{
network_predict_image(net, im);
layer l = net->layers[net->n-1];
int nboxes = num_boxes(net);
fill_network_boxes(net, im.w, im.h, thresh, hier_thresh, 0, 0, dets);
if (nms) do_nms_sort(dets, nboxes, l.classes, nms);
}
*/
void run_detector(int argc, char **argv)
{
char *prefix = find_char_arg(argc, argv, "-prefix", 0);
float thresh = find_float_arg(argc, argv, "-thresh", .5);
float hier_thresh = find_float_arg(argc, argv, "-hier", .5);
int cam_index = find_int_arg(argc, argv, "-c", 0);
int frame_skip = find_int_arg(argc, argv, "-s", 0);
int avg = find_int_arg(argc, argv, "-avg", 3);
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *gpu_list = find_char_arg(argc, argv, "-gpus", 0);
char *outfile = find_char_arg(argc, argv, "-out", 0);
int *gpus = 0;
int gpu = 0;
int ngpus = 0;
if(gpu_list){
printf("%s\n", gpu_list);
int len = strlen(gpu_list);
ngpus = 1;
int i;
for(i = 0; i < len; ++i){
if (gpu_list[i] == ',') ++ngpus;
}
gpus = calloc(ngpus, sizeof(int));
for(i = 0; i < ngpus; ++i){
gpus[i] = atoi(gpu_list);
gpu_list = strchr(gpu_list, ',')+1;
}
} else {
gpu = gpu_index;
gpus = &gpu;
ngpus = 1;
}
int clear = find_arg(argc, argv, "-clear");
int fullscreen = find_arg(argc, argv, "-fullscreen");
int width = find_int_arg(argc, argv, "-w", 0);
int height = find_int_arg(argc, argv, "-h", 0);
int fps = find_int_arg(argc, argv, "-fps", 0);
//int class_id = find_int_arg(argc, argv, "-class", 0);
char *datacfg = argv[3];
char *cfg = argv[4];
char *weights = (argc > 5) ? argv[5] : 0;
char *filename = (argc > 6) ? argv[6]: 0;
if(0==strcmp(argv[2], "test")) test_detector(datacfg, cfg, weights, filename, thresh, hier_thresh, outfile, fullscreen);
else if(0==strcmp(argv[2], "train")) train_detector(datacfg, cfg, weights, gpus, ngpus, clear);
else if(0==strcmp(argv[2], "valid")) validate_detector(datacfg, cfg, weights, outfile);
else if(0==strcmp(argv[2], "valid2")) validate_detector_flip(datacfg, cfg, weights, outfile);
else if(0==strcmp(argv[2], "recall")) validate_detector_recall(cfg, weights);
else if(0==strcmp(argv[2], "demo")) {
list *options = read_data_cfg(datacfg);
int classes = option_find_int(options, "classes", 20);
char *name_list = option_find_str(options, "names", "data/names.list");
char **names = get_labels(name_list);
demo(cfg, weights, thresh, cam_index, filename, names, classes, frame_skip, prefix, avg, hier_thresh, width, height, fps, fullscreen);
}
//else if(0==strcmp(argv[2], "extract")) extract_detector(datacfg, cfg, weights, cam_index, filename, class_id, thresh, frame_skip);
//else if(0==strcmp(argv[2], "censor")) censor_detector(datacfg, cfg, weights, cam_index, filename, class_id, thresh, frame_skip);
}


@ -0,0 +1,27 @@
# Hacky path setup so this script can find darknet.py.
# Prefer adding the python/ directory to your PYTHONPATH instead.
# Work in progress; use at your own risk.
import sys, os
sys.path.append(os.path.join(os.getcwd(),'python/'))
import darknet as dn
import pdb
dn.set_gpu(0)
net = dn.load_net("cfg/yolo-thor.cfg", "/home/pjreddie/backup/yolo-thor_final.weights", 0)
meta = dn.load_meta("cfg/thor.data")
r = dn.detect(net, meta, "data/bedroom.jpg")
print r
# And then down here you could detect a lot more images like:
r = dn.detect(net, meta, "data/eagle.jpg")
print r
r = dn.detect(net, meta, "data/giraffe.jpg")
print r
r = dn.detect(net, meta, "data/horses.jpg")
print r
r = dn.detect(net, meta, "data/person.jpg")
print r


@ -0,0 +1,116 @@
#include "darknet.h"
char *dice_labels[] = {"face1","face2","face3","face4","face5","face6"};
void train_dice(char *cfgfile, char *weightfile)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
char *backup_directory = "/home/pjreddie/backup/";
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = 1024;
int i = *net.seen/imgs;
char **labels = dice_labels;
list *plist = get_paths("data/dice/dice.train.list");
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
clock_t time;
while(1){
++i;
time=clock();
data train = load_data_old(paths, imgs, plist->size, labels, 6, net.w, net.h);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %lf seconds, %ld images\n", i, loss, avg_loss, sec(clock()-time), *net.seen);
free_data(train);
if((i % 100) == 0) net.learning_rate *= .1;
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, i);
save_weights(net, buff);
}
}
}
void validate_dice(char *filename, char *weightfile)
{
network net = parse_network_cfg(filename);
if(weightfile){
load_weights(&net, weightfile);
}
srand(time(0));
char **labels = dice_labels;
list *plist = get_paths("data/dice/dice.val.list");
char **paths = (char **)list_to_array(plist);
int m = plist->size;
free_list(plist);
data val = load_data_old(paths, m, 0, labels, 6, net.w, net.h);
float *acc = network_accuracies(net, val, 2);
printf("Validation Accuracy: %f, %d images\n", acc[0], m);
free_data(val);
}
void test_dice(char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
int i = 0;
char **names = dice_labels;
char buff[256];
char *input = buff;
int indexes[6];
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, net.w, net.h);
float *X = im.data;
float *predictions = network_predict(net, X);
top_predictions(net, 6, indexes);
for(i = 0; i < 6; ++i){
int index = indexes[i];
printf("%s: %f\n", names[index], predictions[index]);
}
free_image(im);
if (filename) break;
}
}
void run_dice(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5]: 0;
if(0==strcmp(argv[2], "test")) test_dice(cfg, weights, filename);
else if(0==strcmp(argv[2], "train")) train_dice(cfg, weights);
else if(0==strcmp(argv[2], "valid")) validate_dice(cfg, weights);
}

File diff suppressed because it is too large


@ -0,0 +1,267 @@
#include "darknet.h"
#include <sys/time.h>
#include <assert.h>
void normalize_image2(image p);
void train_isegmenter(char *datacfg, char *cfgfile, char *weightfile, int *gpus, int ngpus, int clear, int display)
{
int i;
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
printf("%d\n", ngpus);
network **nets = calloc(ngpus, sizeof(network*));
srand(time(0));
int seed = rand();
for(i = 0; i < ngpus; ++i){
srand(seed);
#ifdef GPU
cuda_set_device(gpus[i]);
#endif
nets[i] = load_network(cfgfile, weightfile, clear);
nets[i]->learning_rate *= ngpus;
}
srand(time(0));
network *net = nets[0];
image pred = get_network_image(net);
image embed = pred;
embed.c = 3;
embed.data += embed.w*embed.h*80;
int div = net->w/pred.w;
assert(pred.w * div == net->w);
assert(pred.h * div == net->h);
int imgs = net->batch * net->subdivisions * ngpus;
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
list *options = read_data_cfg(datacfg);
char *backup_directory = option_find_str(options, "backup", "/backup/");
char *train_list = option_find_str(options, "train", "data/train.list");
list *plist = get_paths(train_list);
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
int N = plist->size;
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.threads = 32;
args.scale = div;
args.num_boxes = 90;
args.min = net->min_crop;
args.max = net->max_crop;
args.angle = net->angle;
args.aspect = net->aspect;
args.exposure = net->exposure;
args.saturation = net->saturation;
args.hue = net->hue;
args.size = net->w;
args.classes = 80;
args.paths = paths;
args.n = imgs;
args.m = N;
args.type = ISEG_DATA;
data train;
data buffer;
pthread_t load_thread;
args.d = &buffer;
load_thread = load_data(args);
int epoch = (*net->seen)/N;
while(get_current_batch(net) < net->max_batches || net->max_batches == 0){
double time = what_time_is_it_now();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data(args);
printf("Loaded: %lf seconds\n", what_time_is_it_now()-time);
time = what_time_is_it_now();
float loss = 0;
#ifdef GPU
if(ngpus == 1){
loss = train_network(net, train);
} else {
loss = train_networks(nets, ngpus, train, 4);
}
#else
loss = train_network(net, train);
#endif
if(display){
image tr = float_to_image(net->w/div, net->h/div, 80, train.y.vals[net->batch*(net->subdivisions-1)]);
image im = float_to_image(net->w, net->h, net->c, train.X.vals[net->batch*(net->subdivisions-1)]);
pred.c = 80;
image mask = mask_to_rgb(tr);
image prmask = mask_to_rgb(pred);
image ecopy = copy_image(embed);
normalize_image2(ecopy);
show_image(ecopy, "embed", 1);
free_image(ecopy);
show_image(im, "input", 1);
show_image(prmask, "pred", 1);
show_image(mask, "truth", 100);
free_image(mask);
free_image(prmask);
}
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net->seen)/N, loss, avg_loss, get_current_rate(net), what_time_is_it_now()-time, *net->seen);
free_data(train);
if(*net->seen/N > epoch){
epoch = *net->seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
free_network(net);
free_ptrs((void**)paths, plist->size);
free_list(plist);
free(base);
}
void predict_isegmenter(char *datafile, char *cfg, char *weights, char *filename)
{
network *net = load_network(cfg, weights, 0);
set_batch_network(net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
image sized = letterbox_image(im, net->w, net->h);
float *X = sized.data;
time=clock();
float *predictions = network_predict(net, X);
image pred = get_network_image(net);
image prmask = mask_to_rgb(pred);
printf("Predicted: %f\n", predictions[0]);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
show_image(sized, "orig", 1);
show_image(prmask, "pred", 0);
free_image(im);
free_image(sized);
free_image(prmask);
if (filename) break;
}
}
void demo_isegmenter(char *datacfg, char *cfg, char *weights, int cam_index, const char *filename)
{
#ifdef OPENCV
printf("Classifier Demo\n");
network *net = load_network(cfg, weights, 0);
set_batch_network(net, 1);
srand(2222222);
void * cap = open_video_stream(filename, cam_index, 0,0,0);
if(!cap) error("Couldn't connect to webcam.\n");
float fps = 0;
while(1){
struct timeval tval_before, tval_after, tval_result;
gettimeofday(&tval_before, NULL);
image in = get_image_from_stream(cap);
image in_s = letterbox_image(in, net->w, net->h);
network_predict(net, in_s.data);
printf("\033[2J");
printf("\033[1;1H");
printf("\nFPS:%.0f\n",fps);
image pred = get_network_image(net);
image prmask = mask_to_rgb(pred);
show_image(prmask, "Segmenter", 10);
free_image(in_s);
free_image(in);
free_image(prmask);
gettimeofday(&tval_after, NULL);
timersub(&tval_after, &tval_before, &tval_result);
float curr = 1000000.f/((long int)tval_result.tv_usec);
fps = .9*fps + .1*curr;
}
#endif
}
void run_isegmenter(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/demo] [data] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *gpu_list = find_char_arg(argc, argv, "-gpus", 0);
int *gpus = 0;
int gpu = 0;
int ngpus = 0;
if(gpu_list){
printf("%s\n", gpu_list);
int len = strlen(gpu_list);
ngpus = 1;
int i;
for(i = 0; i < len; ++i){
if (gpu_list[i] == ',') ++ngpus;
}
gpus = calloc(ngpus, sizeof(int));
for(i = 0; i < ngpus; ++i){
gpus[i] = atoi(gpu_list);
gpu_list = strchr(gpu_list, ',')+1;
}
} else {
gpu = gpu_index;
gpus = &gpu;
ngpus = 1;
}
int cam_index = find_int_arg(argc, argv, "-c", 0);
int clear = find_arg(argc, argv, "-clear");
int display = find_arg(argc, argv, "-display");
char *data = argv[3];
char *cfg = argv[4];
char *weights = (argc > 5) ? argv[5] : 0;
char *filename = (argc > 6) ? argv[6]: 0;
if(0==strcmp(argv[2], "test")) predict_isegmenter(data, cfg, weights, filename);
else if(0==strcmp(argv[2], "train")) train_isegmenter(data, cfg, weights, gpus, ngpus, clear, display);
else if(0==strcmp(argv[2], "demo")) demo_isegmenter(data, cfg, weights, cam_index, filename);
}

File diff suppressed because it is too large


@ -0,0 +1,414 @@
#include "darknet.h"
#include <math.h>
// ./darknet nightmare cfg/extractor.recon.cfg ~/trained/yolo-coco.conv frame6.png -reconstruct -iters 500 -i 3 -lambda .1 -rate .01 -smooth 2
float abs_mean(float *x, int n)
{
int i;
float sum = 0;
for (i = 0; i < n; ++i){
sum += fabs(x[i]);
}
return sum/n;
}
void calculate_loss(float *output, float *delta, int n, float thresh)
{
int i;
float mean = mean_array(output, n);
float var = variance_array(output, n);
for(i = 0; i < n; ++i){
if(delta[i] > mean + thresh*sqrt(var)) delta[i] = output[i];
else delta[i] = 0;
}
}
void optimize_picture(network *net, image orig, int max_layer, float scale, float rate, float thresh, int norm)
{
//scale_image(orig, 2);
//translate_image(orig, -1);
net->n = max_layer + 1;
int dx = rand()%16 - 8;
int dy = rand()%16 - 8;
int flip = rand()%2;
image crop = crop_image(orig, dx, dy, orig.w, orig.h);
image im = resize_image(crop, (int)(orig.w * scale), (int)(orig.h * scale));
if(flip) flip_image(im);
resize_network(net, im.w, im.h);
layer last = net->layers[net->n-1];
//net->layers[net->n - 1].activation = LINEAR;
image delta = make_image(im.w, im.h, im.c);
#ifdef GPU
net->delta_gpu = cuda_make_array(delta.data, im.w*im.h*im.c);
copy_cpu(net->inputs, im.data, 1, net->input, 1);
forward_network_gpu(net);
copy_gpu(last.outputs, last.output_gpu, 1, last.delta_gpu, 1);
cuda_pull_array(last.delta_gpu, last.delta, last.outputs);
calculate_loss(last.delta, last.delta, last.outputs, thresh);
cuda_push_array(last.delta_gpu, last.delta, last.outputs);
backward_network_gpu(net);
cuda_pull_array(net->delta_gpu, delta.data, im.w*im.h*im.c);
cuda_free(net->delta_gpu);
net->delta_gpu = 0;
#else
printf("\nnet: %d %d %d im: %d %d %d\n", net->w, net->h, net->inputs, im.w, im.h, im.c);
copy_cpu(net->inputs, im.data, 1, net->input, 1);
net->delta = delta.data;
forward_network(net);
copy_cpu(last.outputs, last.output, 1, last.delta, 1);
calculate_loss(last.output, last.delta, last.outputs, thresh);
backward_network(net);
#endif
if(flip) flip_image(delta);
//normalize_array(delta.data, delta.w*delta.h*delta.c);
image resized = resize_image(delta, orig.w, orig.h);
image out = crop_image(resized, -dx, -dy, orig.w, orig.h);
/*
image g = grayscale_image(out);
free_image(out);
out = g;
*/
//rate = rate / abs_mean(out.data, out.w*out.h*out.c);
image gray = make_image(out.w, out.h, out.c);
fill_image(gray, .5);
axpy_cpu(orig.w*orig.h*orig.c, -1, orig.data, 1, gray.data, 1);
axpy_cpu(orig.w*orig.h*orig.c, .1, gray.data, 1, out.data, 1);
if(norm) normalize_array(out.data, out.w*out.h*out.c);
axpy_cpu(orig.w*orig.h*orig.c, rate, out.data, 1, orig.data, 1);
/*
normalize_array(orig.data, orig.w*orig.h*orig.c);
scale_image(orig, sqrt(var));
translate_image(orig, mean);
*/
//translate_image(orig, 1);
//scale_image(orig, .5);
//normalize_image(orig);
constrain_image(orig);
free_image(crop);
free_image(im);
free_image(delta);
free_image(resized);
free_image(out);
}
void smooth(image recon, image update, float lambda, int num)
{
int i, j, k;
int ii, jj;
for(k = 0; k < recon.c; ++k){
for(j = 0; j < recon.h; ++j){
for(i = 0; i < recon.w; ++i){
int out_index = i + recon.w*(j + recon.h*k);
for(jj = j-num; jj <= j + num && jj < recon.h; ++jj){
if (jj < 0) continue;
for(ii = i-num; ii <= i + num && ii < recon.w; ++ii){
if (ii < 0) continue;
int in_index = ii + recon.w*(jj + recon.h*k);
update.data[out_index] += lambda * (recon.data[in_index] - recon.data[out_index]);
}
}
}
}
}
}
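smooth() accumulates a lambda-weighted difference against every in-bounds neighbor in a (2*num+1) x (2*num+1) window, which pulls each pixel toward its local average. A single-channel sketch with the bounds handling made explicit (`demo_smooth` is illustrative, not a darknet symbol):

```c
/* Single-channel sketch of smooth(): for every pixel, accumulate
   lambda * (neighbor - center) over a (2*num+1) x (2*num+1) window,
   skipping out-of-bounds neighbors. The center's own term is zero. */
void demo_smooth(const float *recon, float *update, int w, int h,
                 float lambda, int num)
{
    for (int j = 0; j < h; ++j) {
        for (int i = 0; i < w; ++i) {
            int out = i + w * j;
            for (int jj = j - num; jj <= j + num; ++jj) {
                if (jj < 0 || jj >= h) continue;
                for (int ii = i - num; ii <= i + num; ++ii) {
                    if (ii < 0 || ii >= w) continue;
                    update[out] += lambda * (recon[ii + w * jj] - recon[out]);
                }
            }
        }
    }
}
```

On a 2x2 image with num=1 every pixel sees all four pixels, so bright outliers receive a negative update and dark pixels a positive one.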
void reconstruct_picture(network *net, float *features, image recon, image update, float rate, float momentum, float lambda, int smooth_size, int iters)
{
int iter = 0;
for (iter = 0; iter < iters; ++iter) {
image delta = make_image(recon.w, recon.h, recon.c);
#ifdef GPU
layer l = get_network_output_layer(net);
cuda_push_array(net->input_gpu, recon.data, recon.w*recon.h*recon.c);
//cuda_push_array(net->truth_gpu, features, net->truths);
net->delta_gpu = cuda_make_array(delta.data, delta.w*delta.h*delta.c);
forward_network_gpu(net);
cuda_push_array(l.delta_gpu, features, l.outputs);
axpy_gpu(l.outputs, -1, l.output_gpu, 1, l.delta_gpu, 1);
backward_network_gpu(net);
cuda_pull_array(net->delta_gpu, delta.data, delta.w*delta.h*delta.c);
cuda_free(net->delta_gpu);
#else
net->input = recon.data;
net->delta = delta.data;
net->truth = features;
forward_network(net);
backward_network(net);
#endif
//normalize_array(delta.data, delta.w*delta.h*delta.c);
axpy_cpu(recon.w*recon.h*recon.c, 1, delta.data, 1, update.data, 1);
//smooth(recon, update, lambda, smooth_size);
axpy_cpu(recon.w*recon.h*recon.c, rate, update.data, 1, recon.data, 1);
scal_cpu(recon.w*recon.h*recon.c, momentum, update.data, 1);
float mag = mag_array(delta.data, recon.w*recon.h*recon.c);
printf("mag: %f\n", mag);
//scal_cpu(recon.w*recon.h*recon.c, 600/mag, recon.data, 1);
constrain_image(recon);
free_image(delta);
}
}
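The per-iteration update in reconstruct_picture() is plain gradient ascent with momentum, expressed through darknet's `axpy_cpu`/`scal_cpu` BLAS helpers. Isolated as a sketch (illustrative names, not darknet API):

```c
/* One momentum step as performed per iteration of reconstruct_picture():
   accumulate the gradient into the velocity, move the image along the
   velocity, then decay the velocity. */
void demo_momentum_step(float *x, float *update, const float *grad,
                        int n, float rate, float momentum)
{
    for (int i = 0; i < n; ++i) {
        update[i] += grad[i];        /* axpy_cpu(n, 1, delta, 1, update, 1)    */
        x[i] += rate * update[i];    /* axpy_cpu(n, rate, update, 1, recon, 1) */
        update[i] *= momentum;       /* scal_cpu(n, momentum, update, 1)       */
    }
}
```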
/*
void run_lsd(int argc, char **argv)
{
srand(0);
if(argc < 3){
fprintf(stderr, "usage: %s %s [cfg] [weights] [image] [options! (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[2];
char *weights = argv[3];
char *input = argv[4];
int norm = find_int_arg(argc, argv, "-norm", 1);
int rounds = find_int_arg(argc, argv, "-rounds", 1);
int iters = find_int_arg(argc, argv, "-iters", 10);
float rate = find_float_arg(argc, argv, "-rate", .04);
float momentum = find_float_arg(argc, argv, "-momentum", .9);
float lambda = find_float_arg(argc, argv, "-lambda", .01);
char *prefix = find_char_arg(argc, argv, "-prefix", 0);
int reconstruct = find_arg(argc, argv, "-reconstruct");
int smooth_size = find_int_arg(argc, argv, "-smooth", 1);
network net = parse_network_cfg(cfg);
load_weights(&net, weights);
char *cfgbase = basecfg(cfg);
char *imbase = basecfg(input);
set_batch_network(&net, 1);
image im = load_image_color(input, 0, 0);
float *features = 0;
image update;
if (reconstruct){
im = letterbox_image(im, net->w, net->h);
int zz = 0;
network_predict(net, im.data);
image out_im = get_network_image(net);
image crop = crop_image(out_im, zz, zz, out_im.w-2*zz, out_im.h-2*zz);
//flip_image(crop);
image f_im = resize_image(crop, out_im.w, out_im.h);
free_image(crop);
printf("%d features\n", out_im.w*out_im.h*out_im.c);
im = resize_image(im, im.w, im.h);
f_im = resize_image(f_im, f_im.w, f_im.h);
features = f_im.data;
int i;
for(i = 0; i < 14*14*512; ++i){
features[i] += rand_uniform(-.19, .19);
}
free_image(im);
im = make_random_image(im.w, im.h, im.c);
update = make_image(im.w, im.h, im.c);
}
int e;
int n;
for(e = 0; e < rounds; ++e){
fprintf(stderr, "Iteration: ");
fflush(stderr);
for(n = 0; n < iters; ++n){
fprintf(stderr, "%d, ", n);
fflush(stderr);
if(reconstruct){
reconstruct_picture(net, features, im, update, rate, momentum, lambda, smooth_size, 1);
//if ((n+1)%30 == 0) rate *= .5;
show_image(im, "reconstruction");
#ifdef OPENCV
cvWaitKey(10);
#endif
}else{
int layer = max_layer + rand()%range - range/2;
int octave = rand()%octaves;
optimize_picture(&net, im, layer, 1/pow(1.33333333, octave), rate, thresh, norm);
}
}
fprintf(stderr, "done\n");
char buff[256];
if (prefix){
sprintf(buff, "%s/%s_%s_%d_%06d",prefix, imbase, cfgbase, max_layer, e);
}else{
sprintf(buff, "%s_%s_%d_%06d",imbase, cfgbase, max_layer, e);
}
printf("%d %s\n", e, buff);
save_image(im, buff);
//show_image(im, buff);
//cvWaitKey(0);
if(rotate){
image rot = rotate_image(im, rotate);
free_image(im);
im = rot;
}
image crop = crop_image(im, im.w * (1. - zoom)/2., im.h * (1.-zoom)/2., im.w*zoom, im.h*zoom);
image resized = resize_image(crop, im.w, im.h);
free_image(im);
free_image(crop);
im = resized;
}
}
*/
void run_nightmare(int argc, char **argv)
{
srand(0);
if(argc < 4){
fprintf(stderr, "usage: %s %s [cfg] [weights] [image] [layer] [options! (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[2];
char *weights = argv[3];
char *input = argv[4];
int max_layer = atoi(argv[5]);
int range = find_int_arg(argc, argv, "-range", 1);
int norm = find_int_arg(argc, argv, "-norm", 1);
int rounds = find_int_arg(argc, argv, "-rounds", 1);
int iters = find_int_arg(argc, argv, "-iters", 10);
int octaves = find_int_arg(argc, argv, "-octaves", 4);
float zoom = find_float_arg(argc, argv, "-zoom", 1.);
float rate = find_float_arg(argc, argv, "-rate", .04);
float thresh = find_float_arg(argc, argv, "-thresh", 1.);
float rotate = find_float_arg(argc, argv, "-rotate", 0);
float momentum = find_float_arg(argc, argv, "-momentum", .9);
float lambda = find_float_arg(argc, argv, "-lambda", .01);
char *prefix = find_char_arg(argc, argv, "-prefix", 0);
int reconstruct = find_arg(argc, argv, "-reconstruct");
int smooth_size = find_int_arg(argc, argv, "-smooth", 1);
network *net = load_network(cfg, weights, 0);
char *cfgbase = basecfg(cfg);
char *imbase = basecfg(input);
set_batch_network(net, 1);
image im = load_image_color(input, 0, 0);
if(0){
float scale = 1;
if(im.w > 512 || im.h > 512){
if(im.w > im.h) scale = 512.0/im.w;
else scale = 512.0/im.h;
}
image resized = resize_image(im, scale*im.w, scale*im.h);
free_image(im);
im = resized;
}
//im = letterbox_image(im, net->w, net->h);
float *features = 0;
image update;
if (reconstruct){
net->n = max_layer;
im = letterbox_image(im, net->w, net->h);
//resize_network(&net, im.w, im.h);
network_predict(net, im.data);
if(net->layers[net->n-1].type == REGION){
printf("region!\n");
zero_objectness(net->layers[net->n-1]);
}
image out_im = copy_image(get_network_image(net));
/*
image crop = crop_image(out_im, zz, zz, out_im.w-2*zz, out_im.h-2*zz);
//flip_image(crop);
image f_im = resize_image(crop, out_im.w, out_im.h);
free_image(crop);
*/
printf("%d features\n", out_im.w*out_im.h*out_im.c);
features = out_im.data;
/*
int i;
for(i = 0; i < 14*14*512; ++i){
//features[i] += rand_uniform(-.19, .19);
}
free_image(im);
im = make_random_image(im.w, im.h, im.c);
*/
update = make_image(im.w, im.h, im.c);
}
int e;
int n;
for(e = 0; e < rounds; ++e){
fprintf(stderr, "Iteration: ");
fflush(stderr);
for(n = 0; n < iters; ++n){
fprintf(stderr, "%d, ", n);
fflush(stderr);
if(reconstruct){
reconstruct_picture(net, features, im, update, rate, momentum, lambda, smooth_size, 1);
//if ((n+1)%30 == 0) rate *= .5;
show_image(im, "reconstruction", 10);
}else{
int layer = max_layer + rand()%range - range/2;
int octave = rand()%octaves;
optimize_picture(net, im, layer, 1/pow(1.33333333, octave), rate, thresh, norm);
}
}
fprintf(stderr, "done\n");
if(0){
image g = grayscale_image(im);
free_image(im);
im = g;
}
char buff[256];
if (prefix){
sprintf(buff, "%s/%s_%s_%d_%06d",prefix, imbase, cfgbase, max_layer, e);
}else{
sprintf(buff, "%s_%s_%d_%06d",imbase, cfgbase, max_layer, e);
}
printf("%d %s\n", e, buff);
save_image(im, buff);
//show_image(im, buff, 0);
if(rotate){
image rot = rotate_image(im, rotate);
free_image(im);
im = rot;
}
image crop = crop_image(im, im.w * (1. - zoom)/2., im.h * (1.-zoom)/2., im.w*zoom, im.h*zoom);
image resized = resize_image(crop, im.w, im.h);
free_image(im);
free_image(crop);
im = resized;
}
}


@ -0,0 +1,240 @@
#include "darknet.h"
#include <sys/time.h>
#include <assert.h>
void train_regressor(char *datacfg, char *cfgfile, char *weightfile, int *gpus, int ngpus, int clear)
{
int i;
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
printf("%d\n", ngpus);
network **nets = calloc(ngpus, sizeof(network*));
srand(time(0));
int seed = rand();
for(i = 0; i < ngpus; ++i){
srand(seed);
#ifdef GPU
cuda_set_device(gpus[i]);
#endif
nets[i] = load_network(cfgfile, weightfile, clear);
nets[i]->learning_rate *= ngpus;
}
srand(time(0));
network *net = nets[0];
int imgs = net->batch * net->subdivisions * ngpus;
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
list *options = read_data_cfg(datacfg);
char *backup_directory = option_find_str(options, "backup", "/backup/");
char *train_list = option_find_str(options, "train", "data/train.list");
int classes = option_find_int(options, "classes", 1);
list *plist = get_paths(train_list);
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
int N = plist->size;
clock_t time;
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.threads = 32;
args.classes = classes;
args.min = net->min_ratio*net->w;
args.max = net->max_ratio*net->w;
args.angle = net->angle;
args.aspect = net->aspect;
args.exposure = net->exposure;
args.saturation = net->saturation;
args.hue = net->hue;
args.size = net->w;
args.paths = paths;
args.n = imgs;
args.m = N;
args.type = REGRESSION_DATA;
data train;
data buffer;
pthread_t load_thread;
args.d = &buffer;
load_thread = load_data(args);
int epoch = (*net->seen)/N;
while(get_current_batch(net) < net->max_batches || net->max_batches == 0){
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = 0;
#ifdef GPU
if(ngpus == 1){
loss = train_network(net, train);
} else {
loss = train_networks(nets, ngpus, train, 4);
}
#else
loss = train_network(net, train);
#endif
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net->seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net->seen);
free_data(train);
if(*net->seen/N > epoch){
epoch = *net->seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
free_network(net);
free_ptrs((void**)paths, plist->size);
free_list(plist);
free(base);
}
void predict_regressor(char *cfgfile, char *weightfile, char *filename)
{
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
image sized = letterbox_image(im, net->w, net->h);
float *X = sized.data;
time=clock();
float *predictions = network_predict(net, X);
printf("Predicted: %f\n", predictions[0]);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
free_image(im);
free_image(sized);
if (filename) break;
}
}
void demo_regressor(char *datacfg, char *cfgfile, char *weightfile, int cam_index, const char *filename)
{
#ifdef OPENCV
printf("Regressor Demo\n");
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
list *options = read_data_cfg(datacfg);
int classes = option_find_int(options, "classes", 1);
char *name_list = option_find_str(options, "names", 0);
char **names = get_labels(name_list);
void * cap = open_video_stream(filename, cam_index, 0,0,0);
if(!cap) error("Couldn't connect to webcam.\n");
float fps = 0;
while(1){
struct timeval tval_before, tval_after, tval_result;
gettimeofday(&tval_before, NULL);
image in = get_image_from_stream(cap);
image crop = center_crop_image(in, net->w, net->h);
grayscale_image_3c(crop);
float *predictions = network_predict(net, crop.data);
printf("\033[2J");
printf("\033[1;1H");
printf("\nFPS:%.0f\n",fps);
int i;
for(i = 0; i < classes; ++i){
printf("%s: %f\n", names[i], predictions[i]);
}
show_image(crop, "Regressor", 10);
free_image(in);
free_image(crop);
gettimeofday(&tval_after, NULL);
timersub(&tval_after, &tval_before, &tval_result);
float curr = 1000000.f/((long int)tval_result.tv_usec);
fps = .9*fps + .1*curr;
}
#endif
}
void run_regressor(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/demo] [data] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *gpu_list = find_char_arg(argc, argv, "-gpus", 0);
int *gpus = 0;
int gpu = 0;
int ngpus = 0;
if(gpu_list){
printf("%s\n", gpu_list);
int len = strlen(gpu_list);
ngpus = 1;
int i;
for(i = 0; i < len; ++i){
if (gpu_list[i] == ',') ++ngpus;
}
gpus = calloc(ngpus, sizeof(int));
for(i = 0; i < ngpus; ++i){
gpus[i] = atoi(gpu_list);
gpu_list = strchr(gpu_list, ',')+1;
}
} else {
gpu = gpu_index;
gpus = &gpu;
ngpus = 1;
}
int cam_index = find_int_arg(argc, argv, "-c", 0);
int clear = find_arg(argc, argv, "-clear");
char *data = argv[3];
char *cfg = argv[4];
char *weights = (argc > 5) ? argv[5] : 0;
char *filename = (argc > 6) ? argv[6]: 0;
if(0==strcmp(argv[2], "test")) predict_regressor(cfg, weights, filename);
else if(0==strcmp(argv[2], "train")) train_regressor(data, cfg, weights, gpus, ngpus, clear);
else if(0==strcmp(argv[2], "demo")) demo_regressor(data, cfg, weights, cam_index, filename);
}


@ -0,0 +1,542 @@
#include "darknet.h"
#include <math.h>
typedef struct {
float *x;
float *y;
} float_pair;
unsigned char **load_files(char *filename, int *n)
{
list *paths = get_paths(filename);
*n = paths->size;
unsigned char **contents = calloc(*n, sizeof(char *));
int i;
node *x = paths->front;
for(i = 0; i < *n; ++i){
contents[i] = read_file((char *)x->val);
x = x->next;
}
return contents;
}
int *read_tokenized_data(char *filename, size_t *read)
{
size_t size = 512;
size_t count = 0;
FILE *fp = fopen(filename, "r");
int *d = calloc(size, sizeof(int));
int n, one;
one = fscanf(fp, "%d", &n);
while(one == 1){
++count;
if(count > size){
size = size*2;
d = realloc(d, size*sizeof(int));
}
d[count-1] = n;
one = fscanf(fp, "%d", &n);
}
fclose(fp);
d = realloc(d, count*sizeof(int));
*read = count;
return d;
}
char **read_tokens(char *filename, size_t *read)
{
size_t size = 512;
size_t count = 0;
FILE *fp = fopen(filename, "r");
char **d = calloc(size, sizeof(char *));
char *line;
while((line=fgetl(fp)) != 0){
++count;
if(count > size){
size = size*2;
d = realloc(d, size*sizeof(char *));
}
if(0==strcmp(line, "<NEWLINE>")) line = "\n";
d[count-1] = line;
}
fclose(fp);
d = realloc(d, count*sizeof(char *));
*read = count;
return d;
}
float_pair get_rnn_token_data(int *tokens, size_t *offsets, int characters, size_t len, int batch, int steps)
{
float *x = calloc(batch * steps * characters, sizeof(float));
float *y = calloc(batch * steps * characters, sizeof(float));
int i,j;
for(i = 0; i < batch; ++i){
for(j = 0; j < steps; ++j){
int curr = tokens[(offsets[i])%len];
int next = tokens[(offsets[i] + 1)%len];
x[(j*batch + i)*characters + curr] = 1;
y[(j*batch + i)*characters + next] = 1;
offsets[i] = (offsets[i] + 1) % len;
if(curr >= characters || curr < 0 || next >= characters || next < 0){
error("Bad char");
}
}
}
float_pair p;
p.x = x;
p.y = y;
return p;
}
float_pair get_seq2seq_data(char **source, char **dest, int n, int characters, size_t len, int batch, int steps)
{
int i,j;
float *x = calloc(batch * steps * characters, sizeof(float));
float *y = calloc(batch * steps * characters, sizeof(float));
for(i = 0; i < batch; ++i){
int index = rand()%n;
//int slen = strlen(source[index]);
//int dlen = strlen(dest[index]);
for(j = 0; j < steps; ++j){
unsigned char curr = source[index][j];
unsigned char next = dest[index][j];
x[(j*batch + i)*characters + curr] = 1;
y[(j*batch + i)*characters + next] = 1;
if(curr > 255 || curr <= 0 || next > 255 || next <= 0){
/*text[(index+j+2)%len] = 0;
printf("%ld %d %d %d %d\n", index, j, len, (int)text[index+j], (int)text[index+j+1]);
printf("%s", text+index);
*/
error("Bad char");
}
}
}
float_pair p;
p.x = x;
p.y = y;
return p;
}
float_pair get_rnn_data(unsigned char *text, size_t *offsets, int characters, size_t len, int batch, int steps)
{
float *x = calloc(batch * steps * characters, sizeof(float));
float *y = calloc(batch * steps * characters, sizeof(float));
int i,j;
for(i = 0; i < batch; ++i){
for(j = 0; j < steps; ++j){
unsigned char curr = text[(offsets[i])%len];
unsigned char next = text[(offsets[i] + 1)%len];
x[(j*batch + i)*characters + curr] = 1;
y[(j*batch + i)*characters + next] = 1;
offsets[i] = (offsets[i] + 1) % len;
if(curr > 255 || curr <= 0 || next > 255 || next <= 0){
/*text[(index+j+2)%len] = 0;
printf("%ld %d %d %d %d\n", index, j, len, (int)text[index+j], (int)text[index+j+1]);
printf("%s", text+index);
*/
error("Bad char");
}
}
}
float_pair p;
p.x = x;
p.y = y;
return p;
}
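All three loaders above emit the same layout: x one-hot encodes the current token and y one-hot encodes the token that follows it, one pair per time step. A single-stream sketch that drops the multi-stream batch indexing (`demo_one_hot_pairs` is illustrative, not a darknet symbol):

```c
#include <stddef.h>
#include <string.h>

/* Single-stream sketch of get_rnn_data(): x one-hot encodes the current
   byte, y one-hot encodes the byte that follows it, wrapping at the end
   of the text. x and y must each hold steps*characters floats. */
void demo_one_hot_pairs(const unsigned char *text, size_t len, size_t offset,
                        int characters, int steps, float *x, float *y)
{
    memset(x, 0, (size_t)steps * characters * sizeof(float));
    memset(y, 0, (size_t)steps * characters * sizeof(float));
    for (int j = 0; j < steps; ++j) {
        unsigned char curr = text[(offset + j) % len];
        unsigned char next = text[(offset + j + 1) % len];
        x[j * characters + curr] = 1;
        y[j * characters + next] = 1;
    }
}
```

The wrap-around via `% len` mirrors the `(offsets[i] + 1) % len` indexing in the loaders, so the last byte of the text pairs with the first.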
void train_char_rnn(char *cfgfile, char *weightfile, char *filename, int clear, int tokenized)
{
srand(time(0));
unsigned char *text = 0;
int *tokens = 0;
size_t size;
if(tokenized){
tokens = read_tokenized_data(filename, &size);
} else {
text = read_file(filename);
size = strlen((const char*)text);
}
char *backup_directory = "/home/pjreddie/backup/";
char *base = basecfg(cfgfile);
fprintf(stderr, "%s\n", base);
float avg_loss = -1;
network *net = load_network(cfgfile, weightfile, clear);
int inputs = net->inputs;
fprintf(stderr, "Learning Rate: %g, Momentum: %g, Decay: %g, Inputs: %d %d %d\n", net->learning_rate, net->momentum, net->decay, inputs, net->batch, net->time_steps);
int batch = net->batch;
int steps = net->time_steps;
if(clear) *net->seen = 0;
int i = (*net->seen)/net->batch;
int streams = batch/steps;
size_t *offsets = calloc(streams, sizeof(size_t));
int j;
for(j = 0; j < streams; ++j){
offsets[j] = rand_size_t()%size;
}
clock_t time;
while(get_current_batch(net) < net->max_batches){
i += 1;
time=clock();
float_pair p;
if(tokenized){
p = get_rnn_token_data(tokens, offsets, inputs, size, streams, steps);
}else{
p = get_rnn_data(text, offsets, inputs, size, streams, steps);
}
copy_cpu(net->inputs*net->batch, p.x, 1, net->input, 1);
copy_cpu(net->truths*net->batch, p.y, 1, net->truth, 1);
float loss = train_network_datum(net) / (batch);
free(p.x);
free(p.y);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
size_t chars = get_current_batch(net)*batch;
fprintf(stderr, "%d: %f, %f avg, %f rate, %lf seconds, %f epochs\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time), (float) chars/size);
for(j = 0; j < streams; ++j){
//printf("%d\n", j);
if(rand()%64 == 0){
//fprintf(stderr, "Reset\n");
offsets[j] = rand_size_t()%size;
reset_network_state(net, j);
}
}
if(i%10000==0){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s.backup", backup_directory, base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
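The running loss printed by this training loop is an exponential moving average, `avg_loss = avg_loss*.9 + loss*.1`, seeded with the first observed loss. A minimal standalone sketch of that update (the function name here is illustrative, not darknet's):

```c
#include <assert.h>
#include <math.h>

/* One step of the smoothed-loss update used in the training loops:
 * keep 90% of the running average, mix in 10% of the new sample.
 * A negative running value means "uninitialized", as in the loops above. */
static float avg_loss_update(float avg, float loss)
{
    if (avg < 0) return loss;
    return avg * 0.9f + loss * 0.1f;
}
```

The same update shows up again later for the FPS readout in the segmenter demo; it trades responsiveness for stability with a fixed 0.9/0.1 mix.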
void print_symbol(int n, char **tokens){
if(tokens){
printf("%s ", tokens[n]);
} else {
printf("%c", n);
}
}
void test_char_rnn(char *cfgfile, char *weightfile, int num, char *seed, float temp, int rseed, char *token_file)
{
char **tokens = 0;
if(token_file){
size_t n;
tokens = read_tokens(token_file, &n);
}
srand(rseed);
char *base = basecfg(cfgfile);
fprintf(stderr, "%s\n", base);
network *net = load_network(cfgfile, weightfile, 0);
int inputs = net->inputs;
int i, j;
for(i = 0; i < net->n; ++i) net->layers[i].temperature = temp;
int c = 0;
int len = strlen(seed);
float *input = calloc(inputs, sizeof(float));
/*
fill_cpu(inputs, 0, input, 1);
for(i = 0; i < 10; ++i){
network_predict(net, input);
}
fill_cpu(inputs, 0, input, 1);
*/
for(i = 0; i < len-1; ++i){
c = seed[i];
input[c] = 1;
network_predict(net, input);
input[c] = 0;
print_symbol(c, tokens);
}
if(len) c = seed[len-1];
print_symbol(c, tokens);
for(i = 0; i < num; ++i){
input[c] = 1;
float *out = network_predict(net, input);
input[c] = 0;
for(j = 0; j < inputs; ++j){
if (out[j] < .0001) out[j] = 0;
}
c = sample_array(out, inputs);
print_symbol(c, tokens);
}
printf("\n");
}
void test_tactic_rnn_multi(char *cfgfile, char *weightfile, int num, float temp, int rseed, char *token_file)
{
char **tokens = 0;
if(token_file){
size_t n;
tokens = read_tokens(token_file, &n);
}
srand(rseed);
char *base = basecfg(cfgfile);
fprintf(stderr, "%s\n", base);
network *net = load_network(cfgfile, weightfile, 0);
int inputs = net->inputs;
int i, j;
for(i = 0; i < net->n; ++i) net->layers[i].temperature = temp;
int c = 0;
float *input = calloc(inputs, sizeof(float));
float *out = 0;
while(1){
reset_network_state(net, 0);
while((c = getc(stdin)) != EOF && c != 0){
input[c] = 1;
out = network_predict(net, input);
input[c] = 0;
}
for(i = 0; i < num; ++i){
for(j = 0; j < inputs; ++j){
if (out[j] < .0001) out[j] = 0;
}
int next = sample_array(out, inputs);
if(c == '.' && next == '\n') break;
c = next;
print_symbol(c, tokens);
input[c] = 1;
out = network_predict(net, input);
input[c] = 0;
}
printf("\n");
}
}
void test_tactic_rnn(char *cfgfile, char *weightfile, int num, float temp, int rseed, char *token_file)
{
char **tokens = 0;
if(token_file){
size_t n;
tokens = read_tokens(token_file, &n);
}
srand(rseed);
char *base = basecfg(cfgfile);
fprintf(stderr, "%s\n", base);
network *net = load_network(cfgfile, weightfile, 0);
int inputs = net->inputs;
int i, j;
for(i = 0; i < net->n; ++i) net->layers[i].temperature = temp;
int c = 0;
float *input = calloc(inputs, sizeof(float));
float *out = 0;
while((c = getc(stdin)) != EOF){
input[c] = 1;
out = network_predict(net, input);
input[c] = 0;
}
for(i = 0; i < num; ++i){
for(j = 0; j < inputs; ++j){
if (out[j] < .0001) out[j] = 0;
}
int next = sample_array(out, inputs);
if(c == '.' && next == '\n') break;
c = next;
print_symbol(c, tokens);
input[c] = 1;
out = network_predict(net, input);
input[c] = 0;
}
printf("\n");
}
void valid_tactic_rnn(char *cfgfile, char *weightfile, char *seed)
{
char *base = basecfg(cfgfile);
fprintf(stderr, "%s\n", base);
network *net = load_network(cfgfile, weightfile, 0);
int inputs = net->inputs;
int count = 0;
int words = 1;
int c;
int len = strlen(seed);
float *input = calloc(inputs, sizeof(float));
int i;
for(i = 0; i < len; ++i){
c = seed[i];
input[(int)c] = 1;
network_predict(net, input);
input[(int)c] = 0;
}
float sum = 0;
c = getc(stdin);
float log2 = log(2);
int in = 0;
while(c != EOF){
int next = getc(stdin);
if(next == EOF) break;
if(next < 0 || next >= 255) error("Out of range character");
input[c] = 1;
float *out = network_predict(net, input);
input[c] = 0;
if(c == '.' && next == '\n') in = 0;
if(!in) {
if(c == '>' && next == '>'){
in = 1;
++words;
}
c = next;
continue;
}
++count;
sum += log(out[next])/log2;
c = next;
printf("%d %d Perplexity: %4.4f Word Perplexity: %4.4f\n", count, words, pow(2, -sum/count), pow(2, -sum/words));
}
}
void valid_char_rnn(char *cfgfile, char *weightfile, char *seed)
{
char *base = basecfg(cfgfile);
fprintf(stderr, "%s\n", base);
network *net = load_network(cfgfile, weightfile, 0);
int inputs = net->inputs;
int count = 0;
int words = 1;
int c;
int len = strlen(seed);
float *input = calloc(inputs, sizeof(float));
int i;
for(i = 0; i < len; ++i){
c = seed[i];
input[(int)c] = 1;
network_predict(net, input);
input[(int)c] = 0;
}
float sum = 0;
c = getc(stdin);
float log2 = log(2);
while(c != EOF){
int next = getc(stdin);
if(next == EOF) break;
if(next < 0 || next >= 255) error("Out of range character");
++count;
if(next == ' ' || next == '\n' || next == '\t') ++words;
input[c] = 1;
float *out = network_predict(net, input);
input[c] = 0;
sum += log(out[next])/log2;
c = next;
printf("%d BPC: %4.4f Perplexity: %4.4f Word Perplexity: %4.4f\n", count, -sum/count, pow(2, -sum/count), pow(2, -sum/words));
}
}
void vec_char_rnn(char *cfgfile, char *weightfile, char *seed)
{
char *base = basecfg(cfgfile);
fprintf(stderr, "%s\n", base);
network *net = load_network(cfgfile, weightfile, 0);
int inputs = net->inputs;
int c;
int seed_len = strlen(seed);
float *input = calloc(inputs, sizeof(float));
int i;
char *line;
while((line=fgetl(stdin)) != 0){
reset_network_state(net, 0);
for(i = 0; i < seed_len; ++i){
c = seed[i];
input[(int)c] = 1;
network_predict(net, input);
input[(int)c] = 0;
}
strip(line);
int str_len = strlen(line);
for(i = 0; i < str_len; ++i){
c = line[i];
input[(int)c] = 1;
network_predict(net, input);
input[(int)c] = 0;
}
c = ' ';
input[(int)c] = 1;
network_predict(net, input);
input[(int)c] = 0;
layer l = net->layers[0];
#ifdef GPU
cuda_pull_array(l.output_gpu, l.output, l.outputs);
#endif
printf("%s", line);
for(i = 0; i < l.outputs; ++i){
printf(",%g", l.output[i]);
}
printf("\n");
}
}
void run_char_rnn(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *filename = find_char_arg(argc, argv, "-file", "data/shakespeare.txt");
char *seed = find_char_arg(argc, argv, "-seed", "\n\n");
int len = find_int_arg(argc, argv, "-len", 1000);
float temp = find_float_arg(argc, argv, "-temp", .7);
int rseed = find_int_arg(argc, argv, "-srand", time(0));
int clear = find_arg(argc, argv, "-clear");
int tokenized = find_arg(argc, argv, "-tokenized");
char *tokens = find_char_arg(argc, argv, "-tokens", 0);
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
if(0==strcmp(argv[2], "train")) train_char_rnn(cfg, weights, filename, clear, tokenized);
else if(0==strcmp(argv[2], "valid")) valid_char_rnn(cfg, weights, seed);
else if(0==strcmp(argv[2], "validtactic")) valid_tactic_rnn(cfg, weights, seed);
else if(0==strcmp(argv[2], "vec")) vec_char_rnn(cfg, weights, seed);
else if(0==strcmp(argv[2], "generate")) test_char_rnn(cfg, weights, len, seed, temp, rseed, tokens);
else if(0==strcmp(argv[2], "generatetactic")) test_tactic_rnn(cfg, weights, len, temp, rseed, tokens);
}

#include "darknet.h"
#ifdef OPENCV
image get_image_from_stream(CvCapture *cap);
void reconstruct_picture(network net, float *features, image recon, image update, float rate, float momentum, float lambda, int smooth_size, int iters);
typedef struct {
float *x;
float *y;
} float_pair;
float_pair get_rnn_vid_data(network net, char **files, int n, int batch, int steps)
{
int b;
assert(net.batch == steps + 1);
image out_im = get_network_image(net);
int output_size = out_im.w*out_im.h*out_im.c;
printf("%d %d %d\n", out_im.w, out_im.h, out_im.c);
float *feats = calloc(net.batch*batch*output_size, sizeof(float));
for(b = 0; b < batch; ++b){
int input_size = net.w*net.h*net.c;
float *input = calloc(input_size*net.batch, sizeof(float));
char *filename = files[rand()%n];
CvCapture *cap = cvCaptureFromFile(filename);
int frames = cvGetCaptureProperty(cap, CV_CAP_PROP_FRAME_COUNT);
if (frames < (steps + 4)){
--b;
free(input);
cvReleaseCapture(&cap);
continue;
}
int index = rand() % (frames - steps - 2);
printf("frames: %d, index: %d\n", frames, index);
cvSetCaptureProperty(cap, CV_CAP_PROP_POS_FRAMES, index);
int i;
for(i = 0; i < net.batch; ++i){
image im = get_image_from_stream(cap);
image re = resize_image(im, net.w, net.h);
//show_image(re, "loaded");
//cvWaitKey(10);
memcpy(input + i*input_size, re.data, input_size*sizeof(float));
free_image(im);
free_image(re);
}
float *output = network_predict(net, input);
free(input);
for(i = 0; i < net.batch; ++i){
memcpy(feats + (b + i*batch)*output_size, output + i*output_size, output_size*sizeof(float));
}
cvReleaseCapture(&cap);
}
//printf("%d %d %d\n", out_im.w, out_im.h, out_im.c);
float_pair p = {0};
p.x = feats;
p.y = feats + output_size*batch; //+ out_im.w*out_im.h*out_im.c;
return p;
}
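get_rnn_vid_data lays the per-step features out contiguously and then sets p.y one stream-batch ahead of p.x, so the target at step t is the feature vector at step t+1. The shift, in isolation for a single stream (names here are illustrative, not darknet's):

```c
#include <assert.h>

/* For a sequence of feature vectors stored contiguously
 * (steps+1 vectors of fsize floats each), inputs are vectors
 * 0..steps-1 and targets are vectors 1..steps: y is simply
 * x advanced by one vector. */
typedef struct { const float *x; const float *y; } pair_view;

static pair_view next_step_pairs(const float *feats, int fsize)
{
    pair_view p;
    p.x = feats;
    p.y = feats + fsize;
    return p;
}
```

This is why the function asserts `net.batch == steps + 1`: one extra frame per clip is needed so every input step has a target.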
void train_vid_rnn(char *cfgfile, char *weightfile)
{
char *train_videos = "data/vid/train.txt";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = net.batch*net.subdivisions;
int i = *net.seen/imgs;
list *plist = get_paths(train_videos);
int N = plist->size;
char **paths = (char **)list_to_array(plist);
clock_t time;
int steps = net.time_steps;
int batch = net.batch / net.time_steps;
network extractor = parse_network_cfg("cfg/extractor.cfg");
load_weights(&extractor, "/home/pjreddie/trained/yolo-coco.conv");
while(get_current_batch(net) < net.max_batches){
i += 1;
time=clock();
float_pair p = get_rnn_vid_data(extractor, paths, N, batch, steps);
copy_cpu(net.inputs*net.batch, p.x, 1, net.input, 1);
copy_cpu(net.truths*net.batch, p.y, 1, net.truth, 1);
float loss = train_network_datum(net) / (net.batch);
free(p.x);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
fprintf(stderr, "%d: %f, %f avg, %f rate, %lf seconds\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time));
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
if(i%10==0){
char buff[256];
sprintf(buff, "%s/%s.backup", backup_directory, base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
image save_reconstruction(network net, image *init, float *feat, char *name, int i)
{
image recon;
if (init) {
recon = copy_image(*init);
} else {
recon = make_random_image(net.w, net.h, 3);
}
image update = make_image(net.w, net.h, 3);
reconstruct_picture(net, feat, recon, update, .01, .9, .1, 2, 50);
char buff[256];
sprintf(buff, "%s%d", name, i);
save_image(recon, buff);
free_image(update);
return recon;
}
void generate_vid_rnn(char *cfgfile, char *weightfile)
{
network extractor = parse_network_cfg("cfg/extractor.recon.cfg");
load_weights(&extractor, "/home/pjreddie/trained/yolo-coco.conv");
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&extractor, 1);
set_batch_network(&net, 1);
int i;
CvCapture *cap = cvCaptureFromFile("/extra/vid/ILSVRC2015/Data/VID/snippets/val/ILSVRC2015_val_00007030.mp4");
float *feat;
float *next;
image last;
for(i = 0; i < 25; ++i){
image im = get_image_from_stream(cap);
image re = resize_image(im, extractor.w, extractor.h);
feat = network_predict(extractor, re.data);
if(i > 0){
printf("%f %f\n", mean_array(feat, 14*14*512), variance_array(feat, 14*14*512));
printf("%f %f\n", mean_array(next, 14*14*512), variance_array(next, 14*14*512));
printf("%f\n", mse_array(feat, 14*14*512));
axpy_cpu(14*14*512, -1, feat, 1, next, 1);
printf("%f\n", mse_array(next, 14*14*512));
}
next = network_predict(net, feat);
free_image(im);
free_image(save_reconstruction(extractor, 0, feat, "feat", i));
free_image(save_reconstruction(extractor, 0, next, "next", i));
if (i==24) last = copy_image(re);
free_image(re);
}
for(i = 0; i < 30; ++i){
next = network_predict(net, next);
image new = save_reconstruction(extractor, &last, next, "new", i);
free_image(last);
last = new;
}
}
void run_vid_rnn(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
//char *filename = (argc > 5) ? argv[5]: 0;
if(0==strcmp(argv[2], "train")) train_vid_rnn(cfg, weights);
else if(0==strcmp(argv[2], "generate")) generate_vid_rnn(cfg, weights);
}
#else
void run_vid_rnn(int argc, char **argv){}
#endif

#include "darknet.h"
#include <sys/time.h>
#include <assert.h>
void train_segmenter(char *datacfg, char *cfgfile, char *weightfile, int *gpus, int ngpus, int clear, int display)
{
int i;
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
printf("%d\n", ngpus);
network **nets = calloc(ngpus, sizeof(network*));
srand(time(0));
int seed = rand();
for(i = 0; i < ngpus; ++i){
srand(seed);
#ifdef GPU
cuda_set_device(gpus[i]);
#endif
nets[i] = load_network(cfgfile, weightfile, clear);
nets[i]->learning_rate *= ngpus;
}
srand(time(0));
network *net = nets[0];
image pred = get_network_image(net);
int div = net->w/pred.w;
assert(pred.w * div == net->w);
assert(pred.h * div == net->h);
int imgs = net->batch * net->subdivisions * ngpus;
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
list *options = read_data_cfg(datacfg);
char *backup_directory = option_find_str(options, "backup", "/backup/");
char *train_list = option_find_str(options, "train", "data/train.list");
list *plist = get_paths(train_list);
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
int N = plist->size;
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.threads = 32;
args.scale = div;
args.min = net->min_crop;
args.max = net->max_crop;
args.angle = net->angle;
args.aspect = net->aspect;
args.exposure = net->exposure;
args.saturation = net->saturation;
args.hue = net->hue;
args.size = net->w;
args.classes = 80;
args.paths = paths;
args.n = imgs;
args.m = N;
args.type = SEGMENTATION_DATA;
data train;
data buffer;
pthread_t load_thread;
args.d = &buffer;
load_thread = load_data(args);
int epoch = (*net->seen)/N;
while(get_current_batch(net) < net->max_batches || net->max_batches == 0){
double time = what_time_is_it_now();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data(args);
printf("Loaded: %lf seconds\n", what_time_is_it_now()-time);
time = what_time_is_it_now();
float loss = 0;
#ifdef GPU
if(ngpus == 1){
loss = train_network(net, train);
} else {
loss = train_networks(nets, ngpus, train, 4);
}
#else
loss = train_network(net, train);
#endif
if(display){
image tr = float_to_image(net->w/div, net->h/div, 80, train.y.vals[net->batch*(net->subdivisions-1)]);
image im = float_to_image(net->w, net->h, net->c, train.X.vals[net->batch*(net->subdivisions-1)]);
image mask = mask_to_rgb(tr);
image prmask = mask_to_rgb(pred);
show_image(im, "input", 1);
show_image(prmask, "pred", 1);
show_image(mask, "truth", 100);
free_image(mask);
free_image(prmask);
}
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net->seen)/N, loss, avg_loss, get_current_rate(net), what_time_is_it_now()-time, *net->seen);
free_data(train);
if(*net->seen/N > epoch){
epoch = *net->seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
free_network(net);
free_ptrs((void**)paths, plist->size);
free_list(plist);
free(base);
}
void predict_segmenter(char *datafile, char *cfg, char *weights, char *filename)
{
network *net = load_network(cfg, weights, 0);
set_batch_network(net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
image sized = letterbox_image(im, net->w, net->h);
float *X = sized.data;
time=clock();
float *predictions = network_predict(net, X);
image pred = get_network_image(net);
image prmask = mask_to_rgb(pred);
printf("Predicted: %f\n", predictions[0]);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
show_image(sized, "orig", 1);
show_image(prmask, "pred", 0);
free_image(im);
free_image(sized);
free_image(prmask);
if (filename) break;
}
}
void demo_segmenter(char *datacfg, char *cfg, char *weights, int cam_index, const char *filename)
{
#ifdef OPENCV
printf("Classifier Demo\n");
network *net = load_network(cfg, weights, 0);
set_batch_network(net, 1);
srand(2222222);
void * cap = open_video_stream(filename, cam_index, 0,0,0);
if(!cap) error("Couldn't connect to webcam.\n");
float fps = 0;
while(1){
struct timeval tval_before, tval_after, tval_result;
gettimeofday(&tval_before, NULL);
image in = get_image_from_stream(cap);
image in_s = letterbox_image(in, net->w, net->h);
network_predict(net, in_s.data);
printf("\033[2J");
printf("\033[1;1H");
printf("\nFPS:%.0f\n",fps);
image pred = get_network_image(net);
image prmask = mask_to_rgb(pred);
show_image(prmask, "Segmenter", 10);
free_image(in_s);
free_image(in);
free_image(prmask);
gettimeofday(&tval_after, NULL);
timersub(&tval_after, &tval_before, &tval_result);
float curr = 1.f/(tval_result.tv_sec + tval_result.tv_usec/1000000.f);
fps = .9*fps + .1*curr;
}
#endif
}
void run_segmenter(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *gpu_list = find_char_arg(argc, argv, "-gpus", 0);
int *gpus = 0;
int gpu = 0;
int ngpus = 0;
if(gpu_list){
printf("%s\n", gpu_list);
int len = strlen(gpu_list);
ngpus = 1;
int i;
for(i = 0; i < len; ++i){
if (gpu_list[i] == ',') ++ngpus;
}
gpus = calloc(ngpus, sizeof(int));
for(i = 0; i < ngpus; ++i){
gpus[i] = atoi(gpu_list);
gpu_list = strchr(gpu_list, ',')+1;
}
} else {
gpu = gpu_index;
gpus = &gpu;
ngpus = 1;
}
int cam_index = find_int_arg(argc, argv, "-c", 0);
int clear = find_arg(argc, argv, "-clear");
int display = find_arg(argc, argv, "-display");
char *data = argv[3];
char *cfg = argv[4];
char *weights = (argc > 5) ? argv[5] : 0;
char *filename = (argc > 6) ? argv[6]: 0;
if(0==strcmp(argv[2], "test")) predict_segmenter(data, cfg, weights, filename);
else if(0==strcmp(argv[2], "train")) train_segmenter(data, cfg, weights, gpus, ngpus, clear, display);
else if(0==strcmp(argv[2], "demo")) demo_segmenter(data, cfg, weights, cam_index, filename);
}

View File

@ -0,0 +1,120 @@
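run_segmenter parses a `-gpus 0,1,2` style argument by counting commas and then walking the string with atoi/strchr. The same idea as a standalone helper (the function name is illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Parse a comma-separated list like "0,1,2" into a malloc'd int
 * array, storing the element count in *n. Mirrors the -gpus
 * parsing loop in run_segmenter. */
static int *parse_int_list(const char *s, int *n)
{
    int count = 1;
    const char *p;
    for(p = s; *p; ++p) if(*p == ',') ++count;
    int *vals = malloc(count * sizeof(int));
    int i;
    for(i = 0; i < count; ++i){
        vals[i] = atoi(s);
        const char *c = strchr(s, ',');
        if(!c) break;
        s = c + 1;
    }
    *n = count;
    return vals;
}
```

Note that atoi stops at the first non-digit character, which is why advancing the pointer past each comma is enough; the caller owns the returned array.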
#include "darknet.h"
void train_super(char *cfgfile, char *weightfile, int clear)
{
char *train_images = "/data/imagenet/imagenet1k.train.list";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network *net = load_network(cfgfile, weightfile, clear);
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
int imgs = net->batch*net->subdivisions;
int i = *net->seen/imgs;
data train, buffer;
list *plist = get_paths(train_images);
//int N = plist->size;
char **paths = (char **)list_to_array(plist);
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.scale = 4;
args.paths = paths;
args.n = imgs;
args.m = plist->size;
args.d = &buffer;
args.type = SUPER_DATA;
pthread_t load_thread = load_data_in_thread(args);
clock_t time;
//while(i*imgs < N*120){
while(get_current_batch(net) < net->max_batches){
i += 1;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %f rate, %lf seconds, %d images\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time), i*imgs);
if(i%1000==0){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s.backup", backup_directory, base);
save_weights(net, buff);
}
free_data(train);
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
void test_super(char *cfgfile, char *weightfile, char *filename)
{
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
resize_network(net, im.w, im.h);
printf("%d %d\n", im.w, im.h);
float *X = im.data;
time=clock();
network_predict(net, X);
image out = get_network_image(net);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
save_image(out, "out");
show_image(out, "out", 0);
free_image(im);
if (filename) break;
}
}
void run_super(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5] : 0;
int clear = find_arg(argc, argv, "-clear");
if(0==strcmp(argv[2], "train")) train_super(cfg, weights, clear);
else if(0==strcmp(argv[2], "test")) test_super(cfg, weights, filename);
/*
else if(0==strcmp(argv[2], "valid")) validate_super(cfg, weights);
*/
}

#include "darknet.h"
#include <sys/time.h>
void train_swag(char *cfgfile, char *weightfile)
{
char *train_images = "data/voc.0712.trainval";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = net.batch*net.subdivisions;
int i = *net.seen/imgs;
data train, buffer;
layer l = net.layers[net.n - 1];
int side = l.side;
int classes = l.classes;
float jitter = l.jitter;
list *plist = get_paths(train_images);
//int N = plist->size;
char **paths = (char **)list_to_array(plist);
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.paths = paths;
args.n = imgs;
args.m = plist->size;
args.classes = classes;
args.jitter = jitter;
args.num_boxes = side;
args.d = &buffer;
args.type = REGION_DATA;
pthread_t load_thread = load_data_in_thread(args);
clock_t time;
//while(i*imgs < N*120){
while(get_current_batch(net) < net.max_batches){
i += 1;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %f rate, %lf seconds, %d images\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time), i*imgs);
if(i%1000==0 || i == 600){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
free_data(train);
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
void run_swag(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
if(0==strcmp(argv[2], "train")) train_swag(cfg, weights);
}

#include "darknet.h"
void train_tag(char *cfgfile, char *weightfile, int clear)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
char *backup_directory = "/home/pjreddie/backup/";
printf("%s\n", base);
network *net = load_network(cfgfile, weightfile, clear);
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
int imgs = 1024;
list *plist = get_paths("/home/pjreddie/tag/train.list");
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
int N = plist->size;
clock_t time;
pthread_t load_thread;
data train;
data buffer;
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.min = net->w;
args.max = net->max_crop;
args.size = net->w;
args.paths = paths;
args.classes = net->outputs;
args.n = imgs;
args.m = N;
args.d = &buffer;
args.type = TAG_DATA;
args.angle = net->angle;
args.exposure = net->exposure;
args.saturation = net->saturation;
args.hue = net->hue;
fprintf(stderr, "%d classes\n", net->outputs);
load_thread = load_data_in_thread(args);
int epoch = (*net->seen)/N;
while(get_current_batch(net) < net->max_batches || net->max_batches == 0){
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net->seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net->seen);
free_data(train);
if(*net->seen/N > epoch){
epoch = *net->seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
pthread_join(load_thread, 0);
free_data(buffer);
free_network(net);
free_ptrs((void**)paths, plist->size);
free_list(plist);
free(base);
}
void test_tag(char *cfgfile, char *weightfile, char *filename)
{
network *net = load_network(cfgfile, weightfile, 0);
set_batch_network(net, 1);
srand(2222222);
int i = 0;
char **names = get_labels("data/tags.txt");
clock_t time;
int indexes[10];
char buff[256];
char *input = buff;
int size = net->w;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
image r = resize_min(im, size);
resize_network(net, r.w, r.h);
printf("%d %d\n", r.w, r.h);
float *X = r.data;
time=clock();
float *predictions = network_predict(net, X);
top_predictions(net, 10, indexes);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
for(i = 0; i < 10; ++i){
int index = indexes[i];
printf("%.1f%%: %s\n", predictions[index]*100, names[index]);
}
if(r.data != im.data) free_image(r);
free_image(im);
if (filename) break;
}
}
void run_tag(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
int clear = find_arg(argc, argv, "-clear");
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5] : 0;
if(0==strcmp(argv[2], "train")) train_tag(cfg, weights, clear);
else if(0==strcmp(argv[2], "test")) test_tag(cfg, weights, filename);
}

#include "darknet.h"
void extract_voxel(char *lfile, char *rfile, char *prefix)
{
#ifdef OPENCV
int w = 1920;
int h = 1080;
int shift = 0;
int count = 0;
CvCapture *lcap = cvCaptureFromFile(lfile);
CvCapture *rcap = cvCaptureFromFile(rfile);
while(1){
image l = get_image_from_stream(lcap);
image r = get_image_from_stream(rcap);
if(!l.w || !r.w) break;
if(count%100 == 0) {
shift = best_3d_shift_r(l, r, -l.h/100, l.h/100);
printf("%d\n", shift);
}
image ls = crop_image(l, (l.w - w)/2, (l.h - h)/2, w, h);
image rs = crop_image(r, 105 + (r.w - w)/2, (r.h - h)/2 + shift, w, h);
char buff[256];
sprintf(buff, "%s_%05d_l", prefix, count);
save_image(ls, buff);
sprintf(buff, "%s_%05d_r", prefix, count);
save_image(rs, buff);
free_image(l);
free_image(r);
free_image(ls);
free_image(rs);
++count;
}
#else
printf("need OpenCV for extraction\n");
#endif
}
void train_voxel(char *cfgfile, char *weightfile)
{
char *train_images = "/data/imagenet/imagenet1k.train.list";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = net.batch*net.subdivisions;
int i = *net.seen/imgs;
data train, buffer;
list *plist = get_paths(train_images);
//int N = plist->size;
char **paths = (char **)list_to_array(plist);
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.scale = 4;
args.paths = paths;
args.n = imgs;
args.m = plist->size;
args.d = &buffer;
args.type = SUPER_DATA;
pthread_t load_thread = load_data_in_thread(args);
clock_t time;
//while(i*imgs < N*120){
while(get_current_batch(net) < net.max_batches){
i += 1;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %f rate, %lf seconds, %d images\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time), i*imgs);
if(i%1000==0){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s.backup", backup_directory, base);
save_weights(net, buff);
}
free_data(train);
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
void test_voxel(char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
resize_network(&net, im.w, im.h);
printf("%d %d\n", im.w, im.h);
float *X = im.data;
time=clock();
network_predict(net, X);
image out = get_network_image(net);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
save_image(out, "out");
free_image(im);
if (filename) break;
}
}
void run_voxel(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5] : 0;
if(0==strcmp(argv[2], "train")) train_voxel(cfg, weights);
else if(0==strcmp(argv[2], "test")) test_voxel(cfg, weights, filename);
else if(0==strcmp(argv[2], "extract")) extract_voxel(argv[3], argv[4], argv[5]);
/*
else if(0==strcmp(argv[2], "valid")) validate_voxel(cfg, weights);
*/
}


@ -0,0 +1,144 @@
#include "darknet.h"
void train_writing(char *cfgfile, char *weightfile)
{
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = net.batch*net.subdivisions;
list *plist = get_paths("figures.list");
char **paths = (char **)list_to_array(plist);
clock_t time;
int N = plist->size;
printf("N: %d\n", N);
image out = get_network_image(net);
data train, buffer;
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.out_w = out.w;
args.out_h = out.h;
args.paths = paths;
args.n = imgs;
args.m = N;
args.d = &buffer;
args.type = WRITING_DATA;
pthread_t load_thread = load_data_in_thread(args);
int epoch = (*net.seen)/N;
while(get_current_batch(net) < net.max_batches || net.max_batches == 0){
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded %lf seconds\n",sec(clock()-time));
time=clock();
float loss = train_network(net, train);
/*
image pred = float_to_image(64, 64, 1, out);
print_image(pred);
*/
/*
image im = float_to_image(256, 256, 3, train.X.vals[0]);
image lab = float_to_image(64, 64, 1, train.y.vals[0]);
image pred = float_to_image(64, 64, 1, out);
show_image(im, "image");
show_image(lab, "label");
print_image(lab);
show_image(pred, "pred");
cvWaitKey(0);
*/
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net.seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net.seen);
free_data(train);
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s_batch_%ld.weights", backup_directory, base, get_current_batch(net));
save_weights(net, buff);
}
if(*net.seen/N > epoch){
epoch = *net.seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
}
}
void test_writing(char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 255);
input[255] = '\0'; /* strncpy does not null-terminate when the source fills the buffer */
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
resize_network(&net, im.w, im.h);
printf("%d %d %d\n", im.h, im.w, im.c);
float *X = im.data;
time=clock();
network_predict(net, X);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
image pred = get_network_image(net);
image upsampled = resize_image(pred, im.w, im.h);
image thresh = threshold_image(upsampled, .5);
pred = thresh;
show_image(pred, "prediction");
show_image(im, "orig");
#ifdef OPENCV
cvWaitKey(0);
cvDestroyAllWindows();
#endif
free_image(upsampled);
free_image(thresh);
free_image(im);
if (filename) break;
}
}
void run_writing(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5] : 0;
if(0==strcmp(argv[2], "train")) train_writing(cfg, weights);
else if(0==strcmp(argv[2], "test")) test_writing(cfg, weights, filename);
}


@ -0,0 +1,327 @@
#include "darknet.h"
char *voc_names[] = {"aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"};
void train_yolo(char *cfgfile, char *weightfile)
{
char *train_images = "/data/voc/train.txt";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network *net = load_network(cfgfile, weightfile, 0);
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
int imgs = net->batch*net->subdivisions;
int i = *net->seen/imgs;
data train, buffer;
layer l = net->layers[net->n - 1];
int side = l.side;
int classes = l.classes;
float jitter = l.jitter;
list *plist = get_paths(train_images);
//int N = plist->size;
char **paths = (char **)list_to_array(plist);
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.paths = paths;
args.n = imgs;
args.m = plist->size;
args.classes = classes;
args.jitter = jitter;
args.num_boxes = side;
args.d = &buffer;
args.type = REGION_DATA;
args.angle = net->angle;
args.exposure = net->exposure;
args.saturation = net->saturation;
args.hue = net->hue;
pthread_t load_thread = load_data_in_thread(args);
clock_t time;
//while(i*imgs < N*120){
while(get_current_batch(net) < net->max_batches){
i += 1;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %f rate, %lf seconds, %d images\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time), i*imgs);
if(i%1000==0 || (i < 1000 && i%100 == 0)){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
free_data(train);
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
void print_yolo_detections(FILE **fps, char *id, int total, int classes, int w, int h, detection *dets)
{
int i, j;
for(i = 0; i < total; ++i){
float xmin = dets[i].bbox.x - dets[i].bbox.w/2.;
float xmax = dets[i].bbox.x + dets[i].bbox.w/2.;
float ymin = dets[i].bbox.y - dets[i].bbox.h/2.;
float ymax = dets[i].bbox.y + dets[i].bbox.h/2.;
if (xmin < 0) xmin = 0;
if (ymin < 0) ymin = 0;
if (xmax > w) xmax = w;
if (ymax > h) ymax = h;
for(j = 0; j < classes; ++j){
if (dets[i].prob[j]) fprintf(fps[j], "%s %f %f %f %f %f\n", id, dets[i].prob[j],
xmin, ymin, xmax, ymax);
}
}
}
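print_yolo_detections above converts darknet's center/width/height boxes to corner coordinates and clamps them to the image bounds before writing them out. A standalone sketch of that conversion; the `cbox` struct and `corners` function names are illustrative, not darknet API:

```c
#include <assert.h>

/* Center-format box: (center x, center y, width, height). */
typedef struct { float x, y, w, h; } cbox;

/* Convert to (xmin, ymin, xmax, ymax) corners, clamped to a w-by-h image,
 * mirroring the arithmetic in print_yolo_detections. */
static void corners(cbox b, int w, int h,
                    float *xmin, float *ymin, float *xmax, float *ymax)
{
    *xmin = b.x - b.w / 2.f; if (*xmin < 0) *xmin = 0;
    *ymin = b.y - b.h / 2.f; if (*ymin < 0) *ymin = 0;
    *xmax = b.x + b.w / 2.f; if (*xmax > w) *xmax = w;
    *ymax = b.y + b.h / 2.f; if (*ymax > h) *ymax = h;
}
```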
void validate_yolo(char *cfg, char *weights)
{
network *net = load_network(cfg, weights, 0);
set_batch_network(net, 1);
fprintf(stderr, "Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
srand(time(0));
char *base = "results/comp4_det_test_";
//list *plist = get_paths("data/voc.2007.test");
list *plist = get_paths("/home/pjreddie/data/voc/2007_test.txt");
//list *plist = get_paths("data/voc.2012.test");
char **paths = (char **)list_to_array(plist);
layer l = net->layers[net->n-1];
int classes = l.classes;
int j;
FILE **fps = calloc(classes, sizeof(FILE *));
for(j = 0; j < classes; ++j){
char buff[1024];
snprintf(buff, 1024, "%s%s.txt", base, voc_names[j]);
fps[j] = fopen(buff, "w");
}
int m = plist->size;
int i=0;
int t;
float thresh = .001;
int nms = 1;
float iou_thresh = .5;
int nthreads = 8;
image *val = calloc(nthreads, sizeof(image));
image *val_resized = calloc(nthreads, sizeof(image));
image *buf = calloc(nthreads, sizeof(image));
image *buf_resized = calloc(nthreads, sizeof(image));
pthread_t *thr = calloc(nthreads, sizeof(pthread_t));
load_args args = {0};
args.w = net->w;
args.h = net->h;
args.type = IMAGE_DATA;
for(t = 0; t < nthreads; ++t){
args.path = paths[i+t];
args.im = &buf[t];
args.resized = &buf_resized[t];
thr[t] = load_data_in_thread(args);
}
time_t start = time(0);
for(i = nthreads; i < m+nthreads; i += nthreads){
fprintf(stderr, "%d\n", i);
for(t = 0; t < nthreads && i+t-nthreads < m; ++t){
pthread_join(thr[t], 0);
val[t] = buf[t];
val_resized[t] = buf_resized[t];
}
for(t = 0; t < nthreads && i+t < m; ++t){
args.path = paths[i+t];
args.im = &buf[t];
args.resized = &buf_resized[t];
thr[t] = load_data_in_thread(args);
}
for(t = 0; t < nthreads && i+t-nthreads < m; ++t){
char *path = paths[i+t-nthreads];
char *id = basecfg(path);
float *X = val_resized[t].data;
network_predict(net, X);
int w = val[t].w;
int h = val[t].h;
int nboxes = 0;
detection *dets = get_network_boxes(net, w, h, thresh, 0, 0, 0, &nboxes);
if (nms) do_nms_sort(dets, l.side*l.side*l.n, classes, iou_thresh);
print_yolo_detections(fps, id, l.side*l.side*l.n, classes, w, h, dets);
free_detections(dets, nboxes);
free(id);
free_image(val[t]);
free_image(val_resized[t]);
}
}
fprintf(stderr, "Total Detection Time: %f Seconds\n", (double)(time(0) - start));
}
void validate_yolo_recall(char *cfg, char *weights)
{
network *net = load_network(cfg, weights, 0);
set_batch_network(net, 1);
fprintf(stderr, "Learning Rate: %g, Momentum: %g, Decay: %g\n", net->learning_rate, net->momentum, net->decay);
srand(time(0));
char *base = "results/comp4_det_test_";
list *plist = get_paths("data/voc.2007.test");
char **paths = (char **)list_to_array(plist);
layer l = net->layers[net->n-1];
int classes = l.classes;
int side = l.side;
int j, k;
FILE **fps = calloc(classes, sizeof(FILE *));
for(j = 0; j < classes; ++j){
char buff[1024];
snprintf(buff, 1024, "%s%s.txt", base, voc_names[j]);
fps[j] = fopen(buff, "w");
}
int m = plist->size;
int i=0;
float thresh = .001;
float iou_thresh = .5;
float nms = 0;
int total = 0;
int correct = 0;
int proposals = 0;
float avg_iou = 0;
for(i = 0; i < m; ++i){
char *path = paths[i];
image orig = load_image_color(path, 0, 0);
image sized = resize_image(orig, net->w, net->h);
char *id = basecfg(path);
network_predict(net, sized.data);
int nboxes = 0;
detection *dets = get_network_boxes(net, orig.w, orig.h, thresh, 0, 0, 1, &nboxes);
if (nms) do_nms_obj(dets, side*side*l.n, 1, nms);
char labelpath[4096];
find_replace(path, "images", "labels", labelpath);
find_replace(labelpath, "JPEGImages", "labels", labelpath);
find_replace(labelpath, ".jpg", ".txt", labelpath);
find_replace(labelpath, ".JPEG", ".txt", labelpath);
int num_labels = 0;
box_label *truth = read_boxes(labelpath, &num_labels);
for(k = 0; k < side*side*l.n; ++k){
if(dets[k].objectness > thresh){
++proposals;
}
}
for (j = 0; j < num_labels; ++j) {
++total;
box t = {truth[j].x, truth[j].y, truth[j].w, truth[j].h};
float best_iou = 0;
for(k = 0; k < side*side*l.n; ++k){
float iou = box_iou(dets[k].bbox, t);
if(dets[k].objectness > thresh && iou > best_iou){
best_iou = iou;
}
}
avg_iou += best_iou;
if(best_iou > iou_thresh){
++correct;
}
}
fprintf(stderr, "%5d %5d %5d\tRPs/Img: %.2f\tIOU: %.2f%%\tRecall:%.2f%%\n", i, correct, total, (float)proposals/(i+1), avg_iou*100/total, 100.*correct/total);
free_detections(dets, nboxes);
free(id);
free_image(orig);
free_image(sized);
}
}
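validate_yolo_recall above scores each ground-truth box by its best IoU against the detections via `box_iou` (declared in darknet.h, implemented elsewhere in the tree). A hedged standalone sketch of intersection-over-union for center-format boxes, assuming the standard definition; `boxc`, `overlap1d`, and `iou` are illustrative names:

```c
#include <assert.h>
#include <math.h>

/* Center-format box: (center x, center y, width, height). */
typedef struct { float x, y, w, h; } boxc;

/* Length of the 1-D overlap between two centered intervals
 * (negative or zero when the intervals are disjoint). */
static float overlap1d(float c1, float l1, float c2, float l2)
{
    float left  = (c1 - l1 / 2 > c2 - l2 / 2) ? c1 - l1 / 2 : c2 - l2 / 2;
    float right = (c1 + l1 / 2 < c2 + l2 / 2) ? c1 + l1 / 2 : c2 + l2 / 2;
    return right - left;
}

/* Intersection over union of two center-format boxes. */
static float iou(boxc a, boxc b)
{
    float w = overlap1d(a.x, a.w, b.x, b.w);
    float h = overlap1d(a.y, a.h, b.y, b.h);
    if (w <= 0 || h <= 0) return 0;
    float inter = w * h;
    float uni = a.w * a.h + b.w * b.h - inter;
    return inter / uni;
}
```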
void test_yolo(char *cfgfile, char *weightfile, char *filename, float thresh)
{
image **alphabet = load_alphabet();
network *net = load_network(cfgfile, weightfile, 0);
layer l = net->layers[net->n-1];
set_batch_network(net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
float nms=.4;
while(1){
if(filename){
strncpy(input, filename, 255);
input[255] = '\0'; /* strncpy does not null-terminate when the source fills the buffer */
} else {
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input,0,0);
image sized = resize_image(im, net->w, net->h);
float *X = sized.data;
time=clock();
network_predict(net, X);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
int nboxes = 0;
detection *dets = get_network_boxes(net, 1, 1, thresh, 0, 0, 0, &nboxes);
if (nms) do_nms_sort(dets, l.side*l.side*l.n, l.classes, nms);
draw_detections(im, dets, l.side*l.side*l.n, thresh, voc_names, alphabet, 20);
save_image(im, "predictions");
show_image(im, "predictions", 0);
free_detections(dets, nboxes);
free_image(im);
free_image(sized);
if (filename) break;
}
}
void run_yolo(int argc, char **argv)
{
char *prefix = find_char_arg(argc, argv, "-prefix", 0);
float thresh = find_float_arg(argc, argv, "-thresh", .2);
int cam_index = find_int_arg(argc, argv, "-c", 0);
int frame_skip = find_int_arg(argc, argv, "-s", 0);
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
int avg = find_int_arg(argc, argv, "-avg", 1);
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5]: 0;
if(0==strcmp(argv[2], "test")) test_yolo(cfg, weights, filename, thresh);
else if(0==strcmp(argv[2], "train")) train_yolo(cfg, weights);
else if(0==strcmp(argv[2], "valid")) validate_yolo(cfg, weights);
else if(0==strcmp(argv[2], "recall")) validate_yolo_recall(cfg, weights);
else if(0==strcmp(argv[2], "demo")) demo(cfg, weights, thresh, cam_index, filename, voc_names, 20, frame_skip, prefix, avg, .5, 0,0,0,0);
}


@ -0,0 +1,799 @@
#ifndef DARKNET_API
#define DARKNET_API
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <pthread.h>
#ifdef GPU
#define BLOCK 512
#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
#ifdef CUDNN
#include "cudnn.h"
#endif
#endif
#define SECRET_NUM -1234
extern int gpu_index;
typedef struct{
int classes;
char **names;
} metadata;
metadata get_metadata(char *file);
typedef struct{
int *leaf;
int n;
int *parent;
int *child;
int *group;
char **name;
int groups;
int *group_size;
int *group_offset;
} tree;
tree *read_tree(char *filename);
typedef enum{
LOGISTIC, RELU, RELIE, LINEAR, RAMP, TANH, PLSE, LEAKY, ELU, LOGGY, STAIR, HARDTAN, LHTAN, SELU
} ACTIVATION;
typedef enum{
PNG, BMP, TGA, JPG
} IMTYPE;
typedef enum{
MULT, ADD, SUB, DIV
} BINARY_ACTIVATION;
typedef enum {
CONVOLUTIONAL,
DECONVOLUTIONAL,
CONNECTED,
MAXPOOL,
SOFTMAX,
DETECTION,
DROPOUT,
CROP,
ROUTE,
COST,
NORMALIZATION,
AVGPOOL,
LOCAL,
SHORTCUT,
ACTIVE,
RNN,
GRU,
LSTM,
CRNN,
BATCHNORM,
NETWORK,
XNOR,
REGION,
YOLO,
ISEG,
REORG,
UPSAMPLE,
LOGXENT,
L2NORM,
BLANK
} LAYER_TYPE;
typedef enum{
SSE, MASKED, L1, SEG, SMOOTH,WGAN
} COST_TYPE;
typedef struct{
int batch;
float learning_rate;
float momentum;
float decay;
int adam;
float B1;
float B2;
float eps;
int t;
} update_args;
struct network;
typedef struct network network;
struct layer;
typedef struct layer layer;
struct layer{
LAYER_TYPE type;
ACTIVATION activation;
COST_TYPE cost_type;
void (*forward) (struct layer, struct network);
void (*backward) (struct layer, struct network);
void (*update) (struct layer, update_args);
void (*forward_gpu) (struct layer, struct network);
void (*backward_gpu) (struct layer, struct network);
void (*update_gpu) (struct layer, update_args);
int batch_normalize;
int shortcut;
int batch;
int forced;
int flipped;
int inputs;
int outputs;
int nweights;
int nbiases;
int extra;
int truths;
int h,w,c;
int out_h, out_w, out_c;
int n;
int max_boxes;
int groups;
int size;
int side;
int stride;
int reverse;
int flatten;
int spatial;
int pad;
int sqrt;
int flip;
int index;
int binary;
int xnor;
int steps;
int hidden;
int truth;
float smooth;
float dot;
float angle;
float jitter;
float saturation;
float exposure;
float shift;
float ratio;
float learning_rate_scale;
float clip;
int noloss;
int softmax;
int classes;
int coords;
int background;
int rescore;
int objectness;
int joint;
int noadjust;
int reorg;
int log;
int tanh;
int *mask;
int total;
float alpha;
float beta;
float kappa;
float coord_scale;
float object_scale;
float noobject_scale;
float mask_scale;
float class_scale;
int bias_match;
int random;
float ignore_thresh;
float truth_thresh;
float thresh;
float focus;
int classfix;
int absolute;
int onlyforward;
int stopbackward;
int dontload;
int dontsave;
int dontloadscales;
int numload;
float temperature;
float probability;
float scale;
char * cweights;
int * indexes;
int * input_layers;
int * input_sizes;
int * map;
int * counts;
float ** sums;
float * rand;
float * cost;
float * state;
float * prev_state;
float * forgot_state;
float * forgot_delta;
float * state_delta;
float * combine_cpu;
float * combine_delta_cpu;
float * concat;
float * concat_delta;
float * binary_weights;
float * biases;
float * bias_updates;
float * scales;
float * scale_updates;
float * weights;
float * weight_updates;
float * delta;
float * output;
float * loss;
float * squared;
float * norms;
float * spatial_mean;
float * mean;
float * variance;
float * mean_delta;
float * variance_delta;
float * rolling_mean;
float * rolling_variance;
float * x;
float * x_norm;
float * m;
float * v;
float * bias_m;
float * bias_v;
float * scale_m;
float * scale_v;
float *z_cpu;
float *r_cpu;
float *h_cpu;
float * prev_state_cpu;
float *temp_cpu;
float *temp2_cpu;
float *temp3_cpu;
float *dh_cpu;
float *hh_cpu;
float *prev_cell_cpu;
float *cell_cpu;
float *f_cpu;
float *i_cpu;
float *g_cpu;
float *o_cpu;
float *c_cpu;
float *dc_cpu;
float * binary_input;
struct layer *input_layer;
struct layer *self_layer;
struct layer *output_layer;
struct layer *reset_layer;
struct layer *update_layer;
struct layer *state_layer;
struct layer *input_gate_layer;
struct layer *state_gate_layer;
struct layer *input_save_layer;
struct layer *state_save_layer;
struct layer *input_state_layer;
struct layer *state_state_layer;
struct layer *input_z_layer;
struct layer *state_z_layer;
struct layer *input_r_layer;
struct layer *state_r_layer;
struct layer *input_h_layer;
struct layer *state_h_layer;
struct layer *wz;
struct layer *uz;
struct layer *wr;
struct layer *ur;
struct layer *wh;
struct layer *uh;
struct layer *uo;
struct layer *wo;
struct layer *uf;
struct layer *wf;
struct layer *ui;
struct layer *wi;
struct layer *ug;
struct layer *wg;
tree *softmax_tree;
size_t workspace_size;
#ifdef GPU
int *indexes_gpu;
float *z_gpu;
float *r_gpu;
float *h_gpu;
float *temp_gpu;
float *temp2_gpu;
float *temp3_gpu;
float *dh_gpu;
float *hh_gpu;
float *prev_cell_gpu;
float *cell_gpu;
float *f_gpu;
float *i_gpu;
float *g_gpu;
float *o_gpu;
float *c_gpu;
float *dc_gpu;
float *m_gpu;
float *v_gpu;
float *bias_m_gpu;
float *scale_m_gpu;
float *bias_v_gpu;
float *scale_v_gpu;
float * combine_gpu;
float * combine_delta_gpu;
float * prev_state_gpu;
float * forgot_state_gpu;
float * forgot_delta_gpu;
float * state_gpu;
float * state_delta_gpu;
float * gate_gpu;
float * gate_delta_gpu;
float * save_gpu;
float * save_delta_gpu;
float * concat_gpu;
float * concat_delta_gpu;
float * binary_input_gpu;
float * binary_weights_gpu;
float * mean_gpu;
float * variance_gpu;
float * rolling_mean_gpu;
float * rolling_variance_gpu;
float * variance_delta_gpu;
float * mean_delta_gpu;
float * x_gpu;
float * x_norm_gpu;
float * weights_gpu;
float * weight_updates_gpu;
float * weight_change_gpu;
float * biases_gpu;
float * bias_updates_gpu;
float * bias_change_gpu;
float * scales_gpu;
float * scale_updates_gpu;
float * scale_change_gpu;
float * output_gpu;
float * loss_gpu;
float * delta_gpu;
float * rand_gpu;
float * squared_gpu;
float * norms_gpu;
#ifdef CUDNN
cudnnTensorDescriptor_t srcTensorDesc, dstTensorDesc;
cudnnTensorDescriptor_t dsrcTensorDesc, ddstTensorDesc;
cudnnTensorDescriptor_t normTensorDesc;
cudnnFilterDescriptor_t weightDesc;
cudnnFilterDescriptor_t dweightDesc;
cudnnConvolutionDescriptor_t convDesc;
cudnnConvolutionFwdAlgo_t fw_algo;
cudnnConvolutionBwdDataAlgo_t bd_algo;
cudnnConvolutionBwdFilterAlgo_t bf_algo;
#endif
#endif
};
void free_layer(layer);
typedef enum {
CONSTANT, STEP, EXP, POLY, STEPS, SIG, RANDOM
} learning_rate_policy;
typedef struct network{
int n;
int batch;
size_t *seen;
int *t;
float epoch;
int subdivisions;
layer *layers;
float *output;
learning_rate_policy policy;
float learning_rate;
float momentum;
float decay;
float gamma;
float scale;
float power;
int time_steps;
int step;
int max_batches;
float *scales;
int *steps;
int num_steps;
int burn_in;
int adam;
float B1;
float B2;
float eps;
int inputs;
int outputs;
int truths;
int notruth;
int h, w, c;
int max_crop;
int min_crop;
float max_ratio;
float min_ratio;
int center;
float angle;
float aspect;
float exposure;
float saturation;
float hue;
int random;
int gpu_index;
tree *hierarchy;
float *input;
float *truth;
float *delta;
float *workspace;
int train;
int index;
float *cost;
float clip;
#ifdef GPU
float *input_gpu;
float *truth_gpu;
float *delta_gpu;
float *output_gpu;
#endif
} network;
typedef struct {
int w;
int h;
float scale;
float rad;
float dx;
float dy;
float aspect;
} augment_args;
typedef struct {
int w;
int h;
int c;
float *data;
} image;
typedef struct{
float x, y, w, h;
} box;
typedef struct detection{
box bbox;
int classes;
float *prob;
float *mask;
float objectness;
int sort_class;
} detection;
typedef struct matrix{
int rows, cols;
float **vals;
} matrix;
typedef struct{
int w, h;
matrix X;
matrix y;
int shallow;
int *num_boxes;
box **boxes;
} data;
typedef enum {
CLASSIFICATION_DATA, DETECTION_DATA, CAPTCHA_DATA, REGION_DATA, IMAGE_DATA, COMPARE_DATA, WRITING_DATA, SWAG_DATA, TAG_DATA, OLD_CLASSIFICATION_DATA, STUDY_DATA, DET_DATA, SUPER_DATA, LETTERBOX_DATA, REGRESSION_DATA, SEGMENTATION_DATA, INSTANCE_DATA, ISEG_DATA
} data_type;
typedef struct load_args{
int threads;
char **paths;
char *path;
int n;
int m;
char **labels;
int h;
int w;
int out_w;
int out_h;
int nh;
int nw;
int num_boxes;
int min, max, size;
int classes;
int background;
int scale;
int center;
int coords;
float jitter;
float angle;
float aspect;
float saturation;
float exposure;
float hue;
data *d;
image *im;
image *resized;
data_type type;
tree *hierarchy;
} load_args;
typedef struct{
int id;
float x,y,w,h;
float left, right, top, bottom;
} box_label;
network *load_network(char *cfg, char *weights, int clear);
load_args get_base_args(network *net);
void free_data(data d);
typedef struct node{
void *val;
struct node *next;
struct node *prev;
} node;
typedef struct list{
int size;
node *front;
node *back;
} list;
pthread_t load_data(load_args args);
list *read_data_cfg(char *filename);
list *read_cfg(char *filename);
unsigned char *read_file(char *filename);
data resize_data(data orig, int w, int h);
data *tile_data(data orig, int divs, int size);
data select_data(data *orig, int *inds);
void forward_network(network *net);
void backward_network(network *net);
void update_network(network *net);
float dot_cpu(int N, float *X, int INCX, float *Y, int INCY);
void axpy_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY);
void copy_cpu(int N, float *X, int INCX, float *Y, int INCY);
void scal_cpu(int N, float ALPHA, float *X, int INCX);
void fill_cpu(int N, float ALPHA, float * X, int INCX);
void normalize_cpu(float *x, float *mean, float *variance, int batch, int filters, int spatial);
void softmax(float *input, int n, float temp, int stride, float *output);
int best_3d_shift_r(image a, image b, int min, int max);
#ifdef GPU
void axpy_gpu(int N, float ALPHA, float * X, int INCX, float * Y, int INCY);
void fill_gpu(int N, float ALPHA, float * X, int INCX);
void scal_gpu(int N, float ALPHA, float * X, int INCX);
void copy_gpu(int N, float * X, int INCX, float * Y, int INCY);
void cuda_set_device(int n);
void cuda_free(float *x_gpu);
float *cuda_make_array(float *x, size_t n);
void cuda_pull_array(float *x_gpu, float *x, size_t n);
float cuda_mag_array(float *x_gpu, size_t n);
void cuda_push_array(float *x_gpu, float *x, size_t n);
void forward_network_gpu(network *net);
void backward_network_gpu(network *net);
void update_network_gpu(network *net);
float train_networks(network **nets, int n, data d, int interval);
void sync_nets(network **nets, int n, int interval);
void harmless_update_network_gpu(network *net);
#endif
image get_label(image **characters, char *string, int size);
void draw_label(image a, int r, int c, image label, const float *rgb);
void save_image(image im, const char *name);
void save_image_options(image im, const char *name, IMTYPE f, int quality);
void get_next_batch(data d, int n, int offset, float *X, float *y);
void grayscale_image_3c(image im);
void normalize_image(image p);
void matrix_to_csv(matrix m);
float train_network_sgd(network *net, data d, int n);
void rgbgr_image(image im);
data copy_data(data d);
data concat_data(data d1, data d2);
data load_cifar10_data(char *filename);
float matrix_topk_accuracy(matrix truth, matrix guess, int k);
void matrix_add_matrix(matrix from, matrix to);
void scale_matrix(matrix m, float scale);
matrix csv_to_matrix(char *filename);
float *network_accuracies(network *net, data d, int n);
float train_network_datum(network *net);
image make_random_image(int w, int h, int c);
void denormalize_connected_layer(layer l);
void denormalize_convolutional_layer(layer l);
void statistics_connected_layer(layer l);
void rescale_weights(layer l, float scale, float trans);
void rgbgr_weights(layer l);
image *get_weights(layer l);
void demo(char *cfgfile, char *weightfile, float thresh, int cam_index, const char *filename, char **names, int classes, int frame_skip, char *prefix, int avg, float hier_thresh, int w, int h, int fps, int fullscreen);
void get_detection_detections(layer l, int w, int h, float thresh, detection *dets);
char *option_find_str(list *l, char *key, char *def);
int option_find_int(list *l, char *key, int def);
int option_find_int_quiet(list *l, char *key, int def);
network *parse_network_cfg(char *filename);
void save_weights(network *net, char *filename);
void load_weights(network *net, char *filename);
void save_weights_upto(network *net, char *filename, int cutoff);
void load_weights_upto(network *net, char *filename, int start, int cutoff);
void zero_objectness(layer l);
void get_region_detections(layer l, int w, int h, int netw, int neth, float thresh, int *map, float tree_thresh, int relative, detection *dets);
int get_yolo_detections(layer l, int w, int h, int netw, int neth, float thresh, int *map, int relative, detection *dets);
void free_network(network *net);
void set_batch_network(network *net, int b);
void set_temp_network(network *net, float t);
image load_image(char *filename, int w, int h, int c);
image load_image_color(char *filename, int w, int h);
image make_image(int w, int h, int c);
image resize_image(image im, int w, int h);
void censor_image(image im, int dx, int dy, int w, int h);
image letterbox_image(image im, int w, int h);
image crop_image(image im, int dx, int dy, int w, int h);
image center_crop_image(image im, int w, int h);
image resize_min(image im, int min);
image resize_max(image im, int max);
image threshold_image(image im, float thresh);
image mask_to_rgb(image mask);
int resize_network(network *net, int w, int h);
void free_matrix(matrix m);
void test_resize(char *filename);
int show_image(image p, const char *name, int ms);
image copy_image(image p);
void draw_box_width(image a, int x1, int y1, int x2, int y2, int w, float r, float g, float b);
float get_current_rate(network *net);
void composite_3d(char *f1, char *f2, char *out, int delta);
data load_data_old(char **paths, int n, int m, char **labels, int k, int w, int h);
size_t get_current_batch(network *net);
void constrain_image(image im);
image get_network_image_layer(network *net, int i);
layer get_network_output_layer(network *net);
void top_predictions(network *net, int n, int *index);
void flip_image(image a);
image float_to_image(int w, int h, int c, float *data);
void ghost_image(image source, image dest, int dx, int dy);
float network_accuracy(network *net, data d);
void random_distort_image(image im, float hue, float saturation, float exposure);
void fill_image(image m, float s);
image grayscale_image(image im);
void rotate_image_cw(image im, int times);
double what_time_is_it_now();
image rotate_image(image m, float rad);
void visualize_network(network *net);
float box_iou(box a, box b);
data load_all_cifar10();
box_label *read_boxes(char *filename, int *n);
box float_to_box(float *f, int stride);
void draw_detections(image im, detection *dets, int num, float thresh, char **names, image **alphabet, int classes);
matrix network_predict_data(network *net, data test);
image **load_alphabet();
image get_network_image(network *net);
float *network_predict(network *net, float *input);
int network_width(network *net);
int network_height(network *net);
float *network_predict_image(network *net, image im);
void network_detect(network *net, image im, float thresh, float hier_thresh, float nms, detection *dets);
detection *get_network_boxes(network *net, int w, int h, float thresh, float hier, int *map, int relative, int *num);
void free_detections(detection *dets, int n);
void reset_network_state(network *net, int b);
char **get_labels(char *filename);
void do_nms_obj(detection *dets, int total, int classes, float thresh);
void do_nms_sort(detection *dets, int total, int classes, float thresh);
matrix make_matrix(int rows, int cols);
#ifdef OPENCV
void *open_video_stream(const char *f, int c, int w, int h, int fps);
image get_image_from_stream(void *p);
void make_window(char *name, int w, int h, int fullscreen);
#endif
void free_image(image m);
float train_network(network *net, data d);
pthread_t load_data_in_thread(load_args args);
void load_data_blocking(load_args args);
list *get_paths(char *filename);
void hierarchy_predictions(float *predictions, int n, tree *hier, int only_leaves, int stride);
void change_leaves(tree *t, char *leaf_list);
int find_int_arg(int argc, char **argv, char *arg, int def);
float find_float_arg(int argc, char **argv, char *arg, float def);
int find_arg(int argc, char* argv[], char *arg);
char *find_char_arg(int argc, char **argv, char *arg, char *def);
char *basecfg(char *cfgfile);
void find_replace(char *str, char *orig, char *rep, char *output);
void free_ptrs(void **ptrs, int n);
char *fgetl(FILE *fp);
void strip(char *s);
float sec(clock_t clocks);
void **list_to_array(list *l);
void top_k(float *a, int n, int k, int *index);
int *read_map(char *filename);
void error(const char *s);
int max_index(float *a, int n);
int max_int_index(int *a, int n);
int sample_array(float *a, int n);
int *random_index_order(int min, int max);
void free_list(list *l);
float mse_array(float *a, int n);
float variance_array(float *a, int n);
float mag_array(float *a, int n);
void scale_array(float *a, int n, float s);
float mean_array(float *a, int n);
float sum_array(float *a, int n);
void normalize_array(float *a, int n);
int *read_intlist(char *s, int *n, int d);
size_t rand_size_t();
float rand_normal();
float rand_uniform(float min, float max);
#endif


@ -0,0 +1,156 @@
from ctypes import *
import math
import random
def sample(probs):
s = sum(probs)
probs = [a/s for a in probs]
r = random.uniform(0, 1)
for i in range(len(probs)):
r = r - probs[i]
if r <= 0:
return i
return len(probs)-1
def c_array(ctype, values):
arr = (ctype*len(values))()
arr[:] = values
return arr
class BOX(Structure):
_fields_ = [("x", c_float),
("y", c_float),
("w", c_float),
("h", c_float)]
class DETECTION(Structure):
_fields_ = [("bbox", BOX),
("classes", c_int),
("prob", POINTER(c_float)),
("mask", POINTER(c_float)),
("objectness", c_float),
("sort_class", c_int)]
class IMAGE(Structure):
_fields_ = [("w", c_int),
("h", c_int),
("c", c_int),
("data", POINTER(c_float))]
class METADATA(Structure):
_fields_ = [("classes", c_int),
("names", POINTER(c_char_p))]
#lib = CDLL("/home/pjreddie/documents/darknet/libdarknet.so", RTLD_GLOBAL)
lib = CDLL("libdarknet.so", RTLD_GLOBAL)
lib.network_width.argtypes = [c_void_p]
lib.network_width.restype = c_int
lib.network_height.argtypes = [c_void_p]
lib.network_height.restype = c_int
predict = lib.network_predict
predict.argtypes = [c_void_p, POINTER(c_float)]
predict.restype = POINTER(c_float)
set_gpu = lib.cuda_set_device
set_gpu.argtypes = [c_int]
make_image = lib.make_image
make_image.argtypes = [c_int, c_int, c_int]
make_image.restype = IMAGE
get_network_boxes = lib.get_network_boxes
get_network_boxes.argtypes = [c_void_p, c_int, c_int, c_float, c_float, POINTER(c_int), c_int, POINTER(c_int)]
get_network_boxes.restype = POINTER(DETECTION)
make_network_boxes = lib.make_network_boxes
make_network_boxes.argtypes = [c_void_p]
make_network_boxes.restype = POINTER(DETECTION)
free_detections = lib.free_detections
free_detections.argtypes = [POINTER(DETECTION), c_int]
free_ptrs = lib.free_ptrs
free_ptrs.argtypes = [POINTER(c_void_p), c_int]
network_predict = lib.network_predict
network_predict.argtypes = [c_void_p, POINTER(c_float)]
reset_rnn = lib.reset_rnn
reset_rnn.argtypes = [c_void_p]
load_net = lib.load_network
load_net.argtypes = [c_char_p, c_char_p, c_int]
load_net.restype = c_void_p
do_nms_obj = lib.do_nms_obj
do_nms_obj.argtypes = [POINTER(DETECTION), c_int, c_int, c_float]
do_nms_sort = lib.do_nms_sort
do_nms_sort.argtypes = [POINTER(DETECTION), c_int, c_int, c_float]
free_image = lib.free_image
free_image.argtypes = [IMAGE]
letterbox_image = lib.letterbox_image
letterbox_image.argtypes = [IMAGE, c_int, c_int]
letterbox_image.restype = IMAGE
load_meta = lib.get_metadata
lib.get_metadata.argtypes = [c_char_p]
lib.get_metadata.restype = METADATA
load_image = lib.load_image_color
load_image.argtypes = [c_char_p, c_int, c_int]
load_image.restype = IMAGE
rgbgr_image = lib.rgbgr_image
rgbgr_image.argtypes = [IMAGE]
predict_image = lib.network_predict_image
predict_image.argtypes = [c_void_p, IMAGE]
predict_image.restype = POINTER(c_float)
def classify(net, meta, im):
out = predict_image(net, im)
res = []
for i in range(meta.classes):
res.append((meta.names[i], out[i]))
res = sorted(res, key=lambda x: -x[1])
return res
def detect(net, meta, image, thresh=.5, hier_thresh=.5, nms=.45):
im = load_image(image, 0, 0)
num = c_int(0)
pnum = pointer(num)
predict_image(net, im)
dets = get_network_boxes(net, im.w, im.h, thresh, hier_thresh, None, 0, pnum)
num = pnum[0]
    if nms: do_nms_obj(dets, num, meta.classes, nms)
res = []
for j in range(num):
for i in range(meta.classes):
if dets[j].prob[i] > 0:
b = dets[j].bbox
res.append((meta.names[i], dets[j].prob[i], (b.x, b.y, b.w, b.h)))
res = sorted(res, key=lambda x: -x[1])
free_image(im)
free_detections(dets, num)
return res
if __name__ == "__main__":
    # Python 3: ctypes c_char_p arguments must be bytes, hence the b"" literals
    #net = load_net(b"cfg/densenet201.cfg", b"/home/pjreddie/trained/densenet201.weights", 0)
    #im = load_image(b"data/wolf.jpg", 0, 0)
    #meta = load_meta(b"cfg/imagenet1k.data")
    #r = classify(net, meta, im)
    #print(r[:10])
    net = load_net(b"cfg/tiny-yolo.cfg", b"tiny-yolo.weights", 0)
    meta = load_meta(b"cfg/coco.data")
    r = detect(net, meta, b"data/dog.jpg")
    print(r)
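The `sample` helper at the top of these bindings is plain inverse-CDF sampling over a list of unnormalized weights: normalize, draw a uniform number, and walk the cumulative distribution until it is exhausted. It can be exercised on its own, without `libdarknet.so`:

```python
import random

def sample(probs):
    # normalize, then subtract probabilities from a uniform draw until it crosses zero
    s = sum(probs)
    probs = [p / s for p in probs]
    r = random.uniform(0, 1)
    for i, p in enumerate(probs):
        r -= p
        if r <= 0:
            return i
    return len(probs) - 1

# empirical frequencies should track the weights 1:3:6
random.seed(0)
counts = [0, 0, 0]
for _ in range(10000):
    counts[sample([1, 3, 6])] += 1
```

With many draws the heaviest weight is selected most often, which is all `detect`/`predict_tactic` rely on.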


@ -0,0 +1,37 @@
from darknet import *
def predict_tactic(net, s):
prob = 0
d = c_array(c_float, [0.0]*256)
tac = ''
if not len(s):
s = '\n'
for c in s[:-1]:
d[ord(c)] = 1
pred = predict(net, d)
d[ord(c)] = 0
c = s[-1]
while 1:
d[ord(c)] = 1
pred = predict(net, d)
d[ord(c)] = 0
pred = [pred[i] for i in range(256)]
ind = sample(pred)
c = chr(ind)
prob += math.log(pred[ind])
if len(tac) and tac[-1] == '.':
break
tac = tac + c
return (tac, prob)
def predict_tactics(net, s, n):
tacs = []
for i in range(n):
reset_rnn(net)
tacs.append(predict_tactic(net, s))
tacs = sorted(tacs, key=lambda x: -x[1])
return tacs
net = load_net(b"cfg/coq.test.cfg", b"/home/pjreddie/backup/coq.backup", 0)
t = predict_tactics(net, "+++++\n", 10)
print(t)


@ -0,0 +1,20 @@
mkdir -p images
mkdir -p images/orig
mkdir -p images/train
mkdir -p images/val
ffmpeg -i Face1.mp4 images/orig/face1_%6d.jpg
ffmpeg -i Face2.mp4 images/orig/face2_%6d.jpg
ffmpeg -i Face3.mp4 images/orig/face3_%6d.jpg
ffmpeg -i Face4.mp4 images/orig/face4_%6d.jpg
ffmpeg -i Face5.mp4 images/orig/face5_%6d.jpg
ffmpeg -i Face6.mp4 images/orig/face6_%6d.jpg
mogrify -resize 100x100^ -gravity center -crop 100x100+0+0 +repage images/orig/*
ls images/orig/* | shuf | head -n 1000 | xargs mv -t images/val
mv images/orig/* images/train
find `pwd`/images/train -name \*.jpg > dice.train.list
find `pwd`/images/val -name \*.jpg > dice.val.list


@ -0,0 +1,5 @@
#!/bin/bash
# Usage:
# wget http://pjreddie.com/media/files/peek.weights
# scripts/gen_tactic.sh < data/goal.txt
./darknet rnn generatetactic cfg/gru.cfg peek.weights 2>/dev/null


@ -0,0 +1,31 @@
#!/bin/bash
# Clone COCO API
git clone https://github.com/pdollar/coco
cd coco
mkdir images
cd images
# Download Images
wget -c https://pjreddie.com/media/files/train2014.zip
wget -c https://pjreddie.com/media/files/val2014.zip
# Unzip
unzip -q train2014.zip
unzip -q val2014.zip
cd ..
# Download COCO Metadata
wget -c https://pjreddie.com/media/files/instances_train-val2014.zip
wget -c https://pjreddie.com/media/files/coco/5k.part
wget -c https://pjreddie.com/media/files/coco/trainvalno5k.part
wget -c https://pjreddie.com/media/files/coco/labels.tgz
tar xzf labels.tgz
unzip -q instances_train-val2014.zip
# Set Up Image Lists
paste <(awk "{print \"$PWD\"}" <5k.part) 5k.part | tr -d '\t' > 5k.txt
paste <(awk "{print \"$PWD\"}" <trainvalno5k.part) trainvalno5k.part | tr -d '\t' > trainvalno5k.txt


@ -0,0 +1,15 @@
#!/bin/bash
mkdir -p labelled
wd=`pwd`
for f in val/*.xml;
do
label=`grep -m1 "<name>" $f | grep -oP '<name>\K[^<]*'`
im=`echo $f | sed 's/val/imgs/; s/xml/JPEG/'`
out=`echo $im | sed 's/JPEG/'${label}'.JPEG/; s/imgs/labelled/'`
ln -s ${wd}/$im ${wd}/$out
done
find ${wd}/labelled -name \*.JPEG > inet.val.list


@ -0,0 +1,59 @@
import xml.etree.ElementTree as ET
import pickle
import os
from os import listdir, getcwd
from os.path import join
sets=[('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test')]
classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
def convert(size, box):
dw = 1./(size[0])
dh = 1./(size[1])
x = (box[0] + box[1])/2.0 - 1
y = (box[2] + box[3])/2.0 - 1
w = box[1] - box[0]
h = box[3] - box[2]
x = x*dw
w = w*dw
y = y*dh
h = h*dh
return (x,y,w,h)
def convert_annotation(year, image_id):
in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))
out_file = open('VOCdevkit/VOC%s/labels/%s.txt'%(year, image_id), 'w')
tree=ET.parse(in_file)
root = tree.getroot()
size = root.find('size')
w = int(size.find('width').text)
h = int(size.find('height').text)
for obj in root.iter('object'):
difficult = obj.find('difficult').text
cls = obj.find('name').text
if cls not in classes or int(difficult)==1:
continue
cls_id = classes.index(cls)
xmlbox = obj.find('bndbox')
b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
bb = convert((w,h), b)
out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
wd = getcwd()
for year, image_set in sets:
if not os.path.exists('VOCdevkit/VOC%s/labels/'%(year)):
os.makedirs('VOCdevkit/VOC%s/labels/'%(year))
image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
list_file = open('%s_%s.txt'%(year, image_set), 'w')
for image_id in image_ids:
list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n'%(wd, year, image_id))
convert_annotation(year, image_id)
list_file.close()
os.system("cat 2007_train.txt 2007_val.txt 2012_train.txt 2012_val.txt > train.txt")
os.system("cat 2007_train.txt 2007_val.txt 2007_test.txt 2012_train.txt 2012_val.txt > train.all.txt")
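The `convert` function in this script maps a VOC `(xmin, xmax, ymin, ymax)` pixel box to YOLO's normalized `(x_center, y_center, width, height)` format; the `- 1` shifts VOC's 1-based coordinates. A small worked example:

```python
def convert(size, box):
    # size = (image_w, image_h); box = (xmin, xmax, ymin, ymax) in VOC pixel coords
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0 - 1  # box center; VOC coordinates are 1-based
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)

# a 40x40 box spanning x:10-50, y:20-60 inside a 100x100 image
result = convert((100, 100), (10, 50, 20, 60))
print(result)  # approximately (0.29, 0.39, 0.4, 0.4)
```

All four outputs are fractions of the image size, which is what the label files written by `convert_annotation` contain.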


@ -0,0 +1,204 @@
#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
#include "activations.h"
#include "cuda.h"
__device__ float lhtan_activate_kernel(float x)
{
if(x < 0) return .001f*x;
if(x > 1) return .001f*(x-1.f) + 1.f;
return x;
}
__device__ float lhtan_gradient_kernel(float x)
{
if(x > 0 && x < 1) return 1;
return .001;
}
__device__ float hardtan_activate_kernel(float x)
{
if (x < -1) return -1;
if (x > 1) return 1;
return x;
}
__device__ float linear_activate_kernel(float x){return x;}
__device__ float logistic_activate_kernel(float x){return 1.f/(1.f + expf(-x));}
__device__ float loggy_activate_kernel(float x){return 2.f/(1.f + expf(-x)) - 1;}
__device__ float relu_activate_kernel(float x){return x*(x>0);}
__device__ float elu_activate_kernel(float x){return (x >= 0)*x + (x < 0)*(expf(x)-1);}
__device__ float selu_activate_kernel(float x){return (x >= 0)*1.0507f*x + (x < 0)*1.0507f*1.6732f*(expf(x)-1);}
__device__ float relie_activate_kernel(float x){return (x>0) ? x : .01f*x;}
__device__ float ramp_activate_kernel(float x){return x*(x>0)+.1f*x;}
__device__ float leaky_activate_kernel(float x){return (x>0) ? x : .1f*x;}
__device__ float tanh_activate_kernel(float x){return (2.f/(1 + expf(-2*x)) - 1);}
__device__ float plse_activate_kernel(float x)
{
if(x < -4) return .01f * (x + 4);
if(x > 4) return .01f * (x - 4) + 1;
return .125f*x + .5f;
}
__device__ float stair_activate_kernel(float x)
{
int n = floorf(x);
if (n%2 == 0) return floorf(x/2);
else return (x - n) + floorf(x/2);
}
__device__ float hardtan_gradient_kernel(float x)
{
if (x > -1 && x < 1) return 1;
return 0;
}
__device__ float linear_gradient_kernel(float x){return 1;}
__device__ float logistic_gradient_kernel(float x){return (1-x)*x;}
__device__ float loggy_gradient_kernel(float x)
{
float y = (x+1)/2;
return 2*(1-y)*y;
}
__device__ float relu_gradient_kernel(float x){return (x>0);}
__device__ float elu_gradient_kernel(float x){return (x >= 0) + (x < 0)*(x + 1);}
__device__ float selu_gradient_kernel(float x){return (x >= 0)*1.0507 + (x < 0)*(x + 1.0507*1.6732);}
__device__ float relie_gradient_kernel(float x){return (x>0) ? 1 : .01f;}
__device__ float ramp_gradient_kernel(float x){return (x>0)+.1f;}
__device__ float leaky_gradient_kernel(float x){return (x>0) ? 1 : .1f;}
__device__ float tanh_gradient_kernel(float x){return 1-x*x;}
__device__ float plse_gradient_kernel(float x){return (x < 0 || x > 1) ? .01f : .125f;}
__device__ float stair_gradient_kernel(float x)
{
if (floorf(x) == x) return 0;
return 1;
}
__device__ float activate_kernel(float x, ACTIVATION a)
{
switch(a){
case LINEAR:
return linear_activate_kernel(x);
case LOGISTIC:
return logistic_activate_kernel(x);
case LOGGY:
return loggy_activate_kernel(x);
case RELU:
return relu_activate_kernel(x);
case ELU:
return elu_activate_kernel(x);
case SELU:
return selu_activate_kernel(x);
case RELIE:
return relie_activate_kernel(x);
case RAMP:
return ramp_activate_kernel(x);
case LEAKY:
return leaky_activate_kernel(x);
case TANH:
return tanh_activate_kernel(x);
case PLSE:
return plse_activate_kernel(x);
case STAIR:
return stair_activate_kernel(x);
case HARDTAN:
return hardtan_activate_kernel(x);
case LHTAN:
return lhtan_activate_kernel(x);
}
return 0;
}
__device__ float gradient_kernel(float x, ACTIVATION a)
{
switch(a){
case LINEAR:
return linear_gradient_kernel(x);
case LOGISTIC:
return logistic_gradient_kernel(x);
case LOGGY:
return loggy_gradient_kernel(x);
case RELU:
return relu_gradient_kernel(x);
case ELU:
return elu_gradient_kernel(x);
case SELU:
return selu_gradient_kernel(x);
case RELIE:
return relie_gradient_kernel(x);
case RAMP:
return ramp_gradient_kernel(x);
case LEAKY:
return leaky_gradient_kernel(x);
case TANH:
return tanh_gradient_kernel(x);
case PLSE:
return plse_gradient_kernel(x);
case STAIR:
return stair_gradient_kernel(x);
case HARDTAN:
return hardtan_gradient_kernel(x);
case LHTAN:
return lhtan_gradient_kernel(x);
}
return 0;
}
__global__ void binary_gradient_array_kernel(float *x, float *dy, int n, int s, BINARY_ACTIVATION a, float *dx)
{
    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
    int i = id % s;
    int b = id / s;
    if(id < n) {
        /* load operands only after the bounds check to avoid out-of-range reads */
        float x1 = x[b*s + i];
        float x2 = x[b*s + s/2 + i];
        float de = dy[id];
        dx[b*s + i] = x2*de;
        dx[b*s + s/2 + i] = x1*de;
    }
}
void binary_gradient_array_gpu(float *x, float *dx, int n, int size, BINARY_ACTIVATION a, float *y)
{
binary_gradient_array_kernel<<<cuda_gridsize(n/2), BLOCK>>>(x, dx, n/2, size, a, y);
check_error(cudaPeekAtLastError());
}
__global__ void binary_activate_array_kernel(float *x, int n, int s, BINARY_ACTIVATION a, float *y)
{
    int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
    int i = id % s;
    int b = id / s;
    if(id < n) {
        /* load operands only after the bounds check to avoid out-of-range reads */
        float x1 = x[b*s + i];
        float x2 = x[b*s + s/2 + i];
        y[id] = x1*x2;
    }
}
void binary_activate_array_gpu(float *x, int n, int size, BINARY_ACTIVATION a, float *y)
{
binary_activate_array_kernel<<<cuda_gridsize(n/2), BLOCK>>>(x, n/2, size, a, y);
check_error(cudaPeekAtLastError());
}
__global__ void activate_array_kernel(float *x, int n, ACTIVATION a)
{
int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(i < n) x[i] = activate_kernel(x[i], a);
}
__global__ void gradient_array_kernel(float *x, int n, ACTIVATION a, float *delta)
{
int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(i < n) delta[i] *= gradient_kernel(x[i], a);
}
void activate_array_gpu(float *x, int n, ACTIVATION a)
{
activate_array_kernel<<<cuda_gridsize(n), BLOCK>>>(x, n, a);
check_error(cudaPeekAtLastError());
}
void gradient_array_gpu(float *x, int n, ACTIVATION a, float *delta)
{
gradient_array_kernel<<<cuda_gridsize(n), BLOCK>>>(x, n, a, delta);
check_error(cudaPeekAtLastError());
}


@ -0,0 +1,63 @@
#include "activation_layer.h"
#include "utils.h"
#include "cuda.h"
#include "blas.h"
#include "gemm.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
layer make_activation_layer(int batch, int inputs, ACTIVATION activation)
{
layer l = {0};
l.type = ACTIVE;
l.inputs = inputs;
l.outputs = inputs;
l.batch=batch;
    l.output = calloc(batch*inputs, sizeof(float));
    l.delta = calloc(batch*inputs, sizeof(float));
l.forward = forward_activation_layer;
l.backward = backward_activation_layer;
#ifdef GPU
l.forward_gpu = forward_activation_layer_gpu;
l.backward_gpu = backward_activation_layer_gpu;
l.output_gpu = cuda_make_array(l.output, inputs*batch);
l.delta_gpu = cuda_make_array(l.delta, inputs*batch);
#endif
l.activation = activation;
fprintf(stderr, "Activation Layer: %d inputs\n", inputs);
return l;
}
void forward_activation_layer(layer l, network net)
{
copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);
activate_array(l.output, l.outputs*l.batch, l.activation);
}
void backward_activation_layer(layer l, network net)
{
gradient_array(l.output, l.outputs*l.batch, l.activation, l.delta);
copy_cpu(l.outputs*l.batch, l.delta, 1, net.delta, 1);
}
#ifdef GPU
void forward_activation_layer_gpu(layer l, network net)
{
copy_gpu(l.outputs*l.batch, net.input_gpu, 1, l.output_gpu, 1);
activate_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation);
}
void backward_activation_layer_gpu(layer l, network net)
{
gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);
copy_gpu(l.outputs*l.batch, l.delta_gpu, 1, net.delta_gpu, 1);
}
#endif


@ -0,0 +1,19 @@
#ifndef ACTIVATION_LAYER_H
#define ACTIVATION_LAYER_H
#include "activations.h"
#include "layer.h"
#include "network.h"
layer make_activation_layer(int batch, int inputs, ACTIVATION activation);
void forward_activation_layer(layer l, network net);
void backward_activation_layer(layer l, network net);
#ifdef GPU
void forward_activation_layer_gpu(layer l, network net);
void backward_activation_layer_gpu(layer l, network net);
#endif
#endif


@ -0,0 +1,150 @@
#include "activations.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char *get_activation_string(ACTIVATION a)
{
switch(a){
case LOGISTIC:
return "logistic";
case LOGGY:
return "loggy";
case RELU:
return "relu";
case ELU:
return "elu";
case SELU:
return "selu";
case RELIE:
return "relie";
case RAMP:
return "ramp";
case LINEAR:
return "linear";
case TANH:
return "tanh";
case PLSE:
return "plse";
case LEAKY:
return "leaky";
case STAIR:
return "stair";
case HARDTAN:
return "hardtan";
case LHTAN:
return "lhtan";
default:
break;
}
return "relu";
}
ACTIVATION get_activation(char *s)
{
if (strcmp(s, "logistic")==0) return LOGISTIC;
if (strcmp(s, "loggy")==0) return LOGGY;
if (strcmp(s, "relu")==0) return RELU;
if (strcmp(s, "elu")==0) return ELU;
if (strcmp(s, "selu")==0) return SELU;
if (strcmp(s, "relie")==0) return RELIE;
if (strcmp(s, "plse")==0) return PLSE;
if (strcmp(s, "hardtan")==0) return HARDTAN;
if (strcmp(s, "lhtan")==0) return LHTAN;
if (strcmp(s, "linear")==0) return LINEAR;
if (strcmp(s, "ramp")==0) return RAMP;
if (strcmp(s, "leaky")==0) return LEAKY;
if (strcmp(s, "tanh")==0) return TANH;
if (strcmp(s, "stair")==0) return STAIR;
fprintf(stderr, "Couldn't find activation function %s, going with ReLU\n", s);
return RELU;
}
float activate(float x, ACTIVATION a)
{
switch(a){
case LINEAR:
return linear_activate(x);
case LOGISTIC:
return logistic_activate(x);
case LOGGY:
return loggy_activate(x);
case RELU:
return relu_activate(x);
case ELU:
return elu_activate(x);
case SELU:
return selu_activate(x);
case RELIE:
return relie_activate(x);
case RAMP:
return ramp_activate(x);
case LEAKY:
return leaky_activate(x);
case TANH:
return tanh_activate(x);
case PLSE:
return plse_activate(x);
case STAIR:
return stair_activate(x);
case HARDTAN:
return hardtan_activate(x);
case LHTAN:
return lhtan_activate(x);
}
return 0;
}
void activate_array(float *x, const int n, const ACTIVATION a)
{
int i;
for(i = 0; i < n; ++i){
x[i] = activate(x[i], a);
}
}
float gradient(float x, ACTIVATION a)
{
switch(a){
case LINEAR:
return linear_gradient(x);
case LOGISTIC:
return logistic_gradient(x);
case LOGGY:
return loggy_gradient(x);
case RELU:
return relu_gradient(x);
case ELU:
return elu_gradient(x);
case SELU:
return selu_gradient(x);
case RELIE:
return relie_gradient(x);
case RAMP:
return ramp_gradient(x);
case LEAKY:
return leaky_gradient(x);
case TANH:
return tanh_gradient(x);
case PLSE:
return plse_gradient(x);
case STAIR:
return stair_gradient(x);
case HARDTAN:
return hardtan_gradient(x);
case LHTAN:
return lhtan_gradient(x);
}
return 0;
}
void gradient_array(const float *x, const int n, const ACTIVATION a, float *delta)
{
int i;
for(i = 0; i < n; ++i){
delta[i] *= gradient(x[i], a);
}
}


@ -0,0 +1,87 @@
#ifndef ACTIVATIONS_H
#define ACTIVATIONS_H
#include "darknet.h"
#include "cuda.h"
#include "math.h"
ACTIVATION get_activation(char *s);
char *get_activation_string(ACTIVATION a);
float activate(float x, ACTIVATION a);
float gradient(float x, ACTIVATION a);
void gradient_array(const float *x, const int n, const ACTIVATION a, float *delta);
void activate_array(float *x, const int n, const ACTIVATION a);
#ifdef GPU
void activate_array_gpu(float *x, int n, ACTIVATION a);
void gradient_array_gpu(float *x, int n, ACTIVATION a, float *delta);
#endif
static inline float stair_activate(float x)
{
int n = floor(x);
if (n%2 == 0) return floor(x/2.);
else return (x - n) + floor(x/2.);
}
static inline float hardtan_activate(float x)
{
if (x < -1) return -1;
if (x > 1) return 1;
return x;
}
static inline float linear_activate(float x){return x;}
static inline float logistic_activate(float x){return 1./(1. + exp(-x));}
static inline float loggy_activate(float x){return 2./(1. + exp(-x)) - 1;}
static inline float relu_activate(float x){return x*(x>0);}
static inline float elu_activate(float x){return (x >= 0)*x + (x < 0)*(exp(x)-1);}
static inline float selu_activate(float x){return (x >= 0)*1.0507*x + (x < 0)*1.0507*1.6732*(exp(x)-1);}
static inline float relie_activate(float x){return (x>0) ? x : .01*x;}
static inline float ramp_activate(float x){return x*(x>0)+.1*x;}
static inline float leaky_activate(float x){return (x>0) ? x : .1*x;}
static inline float tanh_activate(float x){return (exp(2*x)-1)/(exp(2*x)+1);}
static inline float plse_activate(float x)
{
if(x < -4) return .01 * (x + 4);
if(x > 4) return .01 * (x - 4) + 1;
return .125*x + .5;
}
static inline float lhtan_activate(float x)
{
if(x < 0) return .001*x;
if(x > 1) return .001*(x-1) + 1;
return x;
}
static inline float lhtan_gradient(float x)
{
if(x > 0 && x < 1) return 1;
return .001;
}
static inline float hardtan_gradient(float x)
{
if (x > -1 && x < 1) return 1;
return 0;
}
static inline float linear_gradient(float x){return 1;}
static inline float logistic_gradient(float x){return (1-x)*x;}
static inline float loggy_gradient(float x)
{
float y = (x+1.)/2.;
return 2*(1-y)*y;
}
static inline float stair_gradient(float x)
{
if (floor(x) == x) return 0;
return 1;
}
static inline float relu_gradient(float x){return (x>0);}
static inline float elu_gradient(float x){return (x >= 0) + (x < 0)*(x + 1);}
static inline float selu_gradient(float x){return (x >= 0)*1.0507 + (x < 0)*(x + 1.0507*1.6732);}
static inline float relie_gradient(float x){return (x>0) ? 1 : .01;}
static inline float ramp_gradient(float x){return (x>0)+.1;}
static inline float leaky_gradient(float x){return (x>0) ? 1 : .1;}
static inline float tanh_gradient(float x){return 1-x*x;}
static inline float plse_gradient(float x){return (x < 0 || x > 1) ? .01 : .125;}
#endif
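A detail worth noting in this header: the `*_gradient` functions evaluate the derivative on the activation's *output*, not its input — e.g. `logistic_gradient(y) = (1-y)*y` where `y = logistic(x)`, which saves recomputing the sigmoid during backprop. A minimal numerical check of that identity (pure-Python sketch, not darknet API):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def logistic_gradient(y):
    # takes the activation output y, mirroring darknet's convention
    return (1 - y) * y

x = 0.7
y = logistic(x)
eps = 1e-6
# central-difference derivative of logistic at x
numeric = (logistic(x + eps) - logistic(x - eps)) / (2 * eps)
print(abs(logistic_gradient(y) - numeric))
```

The same output-based convention holds for `tanh_gradient(y) = 1 - y*y` and the piecewise activations here.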


@ -0,0 +1,71 @@
#include "avgpool_layer.h"
#include "cuda.h"
#include <stdio.h>
avgpool_layer make_avgpool_layer(int batch, int w, int h, int c)
{
fprintf(stderr, "avg %4d x%4d x%4d -> %4d\n", w, h, c, c);
avgpool_layer l = {0};
l.type = AVGPOOL;
l.batch = batch;
l.h = h;
l.w = w;
l.c = c;
l.out_w = 1;
l.out_h = 1;
l.out_c = c;
l.outputs = l.out_c;
l.inputs = h*w*c;
int output_size = l.outputs * batch;
l.output = calloc(output_size, sizeof(float));
l.delta = calloc(output_size, sizeof(float));
l.forward = forward_avgpool_layer;
l.backward = backward_avgpool_layer;
#ifdef GPU
l.forward_gpu = forward_avgpool_layer_gpu;
l.backward_gpu = backward_avgpool_layer_gpu;
l.output_gpu = cuda_make_array(l.output, output_size);
l.delta_gpu = cuda_make_array(l.delta, output_size);
#endif
return l;
}
void resize_avgpool_layer(avgpool_layer *l, int w, int h)
{
l->w = w;
l->h = h;
l->inputs = h*w*l->c;
}
void forward_avgpool_layer(const avgpool_layer l, network net)
{
int b,i,k;
for(b = 0; b < l.batch; ++b){
for(k = 0; k < l.c; ++k){
int out_index = k + b*l.c;
l.output[out_index] = 0;
for(i = 0; i < l.h*l.w; ++i){
int in_index = i + l.h*l.w*(k + b*l.c);
l.output[out_index] += net.input[in_index];
}
l.output[out_index] /= l.h*l.w;
}
}
}
void backward_avgpool_layer(const avgpool_layer l, network net)
{
int b,i,k;
for(b = 0; b < l.batch; ++b){
for(k = 0; k < l.c; ++k){
int out_index = k + b*l.c;
for(i = 0; i < l.h*l.w; ++i){
int in_index = i + l.h*l.w*(k + b*l.c);
net.delta[in_index] += l.delta[out_index] / (l.h*l.w);
}
}
}
}
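`forward_avgpool_layer` above reduces each `w*h` feature map to its channel mean, producing a 1x1xC output per batch item; darknet stores activations as a flat array in CHW order. A pure-Python reference of the same reduction for one batch item (hypothetical helper, for illustration only):

```python
def forward_avgpool(inp, w, h, c):
    # inp is a flat CHW list; the output holds one mean per channel
    out = []
    for k in range(c):
        chan = inp[k * h * w:(k + 1) * h * w]
        out.append(sum(chan) / (h * w))
    return out

# two 2x2 channels: channel means are 2.5 and 6.5
out = forward_avgpool([1, 2, 3, 4, 5, 6, 7, 8], 2, 2, 2)
print(out)  # [2.5, 6.5]
```

The backward pass simply spreads each channel's delta uniformly over its `h*w` inputs, dividing by `h*w` — the transpose of this mean.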


@ -0,0 +1,23 @@
#ifndef AVGPOOL_LAYER_H
#define AVGPOOL_LAYER_H
#include "image.h"
#include "cuda.h"
#include "layer.h"
#include "network.h"
typedef layer avgpool_layer;
image get_avgpool_image(avgpool_layer l);
avgpool_layer make_avgpool_layer(int batch, int w, int h, int c);
void resize_avgpool_layer(avgpool_layer *l, int w, int h);
void forward_avgpool_layer(const avgpool_layer l, network net);
void backward_avgpool_layer(const avgpool_layer l, network net);
#ifdef GPU
void forward_avgpool_layer_gpu(avgpool_layer l, network net);
void backward_avgpool_layer_gpu(avgpool_layer l, network net);
#endif
#endif


@ -0,0 +1,59 @@
#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
#include "avgpool_layer.h"
#include "cuda.h"
__global__ void forward_avgpool_layer_kernel(int n, int w, int h, int c, float *input, float *output)
{
int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(id >= n) return;
int k = id % c;
id /= c;
int b = id;
int i;
int out_index = (k + c*b);
output[out_index] = 0;
for(i = 0; i < w*h; ++i){
int in_index = i + h*w*(k + b*c);
output[out_index] += input[in_index];
}
output[out_index] /= w*h;
}
__global__ void backward_avgpool_layer_kernel(int n, int w, int h, int c, float *in_delta, float *out_delta)
{
int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(id >= n) return;
int k = id % c;
id /= c;
int b = id;
int i;
int out_index = (k + c*b);
for(i = 0; i < w*h; ++i){
int in_index = i + h*w*(k + b*c);
in_delta[in_index] += out_delta[out_index] / (w*h);
}
}
void forward_avgpool_layer_gpu(avgpool_layer layer, network net)
{
size_t n = layer.c*layer.batch;
forward_avgpool_layer_kernel<<<cuda_gridsize(n), BLOCK>>>(n, layer.w, layer.h, layer.c, net.input_gpu, layer.output_gpu);
check_error(cudaPeekAtLastError());
}
void backward_avgpool_layer_gpu(avgpool_layer layer, network net)
{
size_t n = layer.c*layer.batch;
backward_avgpool_layer_kernel<<<cuda_gridsize(n), BLOCK>>>(n, layer.w, layer.h, layer.c, net.delta_gpu, layer.delta_gpu);
check_error(cudaPeekAtLastError());
}


@ -0,0 +1,279 @@
#include "convolutional_layer.h"
#include "batchnorm_layer.h"
#include "blas.h"
#include <stdio.h>
layer make_batchnorm_layer(int batch, int w, int h, int c)
{
fprintf(stderr, "Batch Normalization Layer: %d x %d x %d image\n", w,h,c);
layer l = {0};
l.type = BATCHNORM;
l.batch = batch;
l.h = l.out_h = h;
l.w = l.out_w = w;
l.c = l.out_c = c;
l.output = calloc(h * w * c * batch, sizeof(float));
l.delta = calloc(h * w * c * batch, sizeof(float));
l.inputs = w*h*c;
l.outputs = l.inputs;
l.scales = calloc(c, sizeof(float));
l.scale_updates = calloc(c, sizeof(float));
l.biases = calloc(c, sizeof(float));
l.bias_updates = calloc(c, sizeof(float));
int i;
for(i = 0; i < c; ++i){
l.scales[i] = 1;
}
l.mean = calloc(c, sizeof(float));
l.variance = calloc(c, sizeof(float));
l.rolling_mean = calloc(c, sizeof(float));
l.rolling_variance = calloc(c, sizeof(float));
l.forward = forward_batchnorm_layer;
l.backward = backward_batchnorm_layer;
#ifdef GPU
l.forward_gpu = forward_batchnorm_layer_gpu;
l.backward_gpu = backward_batchnorm_layer_gpu;
l.output_gpu = cuda_make_array(l.output, h * w * c * batch);
l.delta_gpu = cuda_make_array(l.delta, h * w * c * batch);
l.biases_gpu = cuda_make_array(l.biases, c);
l.bias_updates_gpu = cuda_make_array(l.bias_updates, c);
l.scales_gpu = cuda_make_array(l.scales, c);
l.scale_updates_gpu = cuda_make_array(l.scale_updates, c);
l.mean_gpu = cuda_make_array(l.mean, c);
l.variance_gpu = cuda_make_array(l.variance, c);
l.rolling_mean_gpu = cuda_make_array(l.mean, c);
l.rolling_variance_gpu = cuda_make_array(l.variance, c);
l.mean_delta_gpu = cuda_make_array(l.mean, c);
l.variance_delta_gpu = cuda_make_array(l.variance, c);
l.x_gpu = cuda_make_array(l.output, l.batch*l.outputs);
l.x_norm_gpu = cuda_make_array(l.output, l.batch*l.outputs);
#ifdef CUDNN
cudnnCreateTensorDescriptor(&l.normTensorDesc);
cudnnCreateTensorDescriptor(&l.dstTensorDesc);
cudnnSetTensor4dDescriptor(l.dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l.batch, l.out_c, l.out_h, l.out_w);
cudnnSetTensor4dDescriptor(l.normTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l.out_c, 1, 1);
#endif
#endif
return l;
}
void backward_scale_cpu(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates)
{
int i,b,f;
for(f = 0; f < n; ++f){
float sum = 0;
for(b = 0; b < batch; ++b){
for(i = 0; i < size; ++i){
int index = i + size*(f + n*b);
sum += delta[index] * x_norm[index];
}
}
scale_updates[f] += sum;
}
}
void mean_delta_cpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta)
{
int i,j,k;
for(i = 0; i < filters; ++i){
mean_delta[i] = 0;
for (j = 0; j < batch; ++j) {
for (k = 0; k < spatial; ++k) {
int index = j*filters*spatial + i*spatial + k;
mean_delta[i] += delta[index];
}
}
mean_delta[i] *= (-1./sqrt(variance[i] + .00001f));
}
}
void variance_delta_cpu(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta)
{
int i,j,k;
for(i = 0; i < filters; ++i){
variance_delta[i] = 0;
for(j = 0; j < batch; ++j){
for(k = 0; k < spatial; ++k){
int index = j*filters*spatial + i*spatial + k;
variance_delta[i] += delta[index]*(x[index] - mean[i]);
}
}
variance_delta[i] *= -.5 * pow(variance[i] + .00001f, (float)(-3./2.));
}
}
void normalize_delta_cpu(float *x, float *mean, float *variance, float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta)
{
int f, j, k;
for(j = 0; j < batch; ++j){
for(f = 0; f < filters; ++f){
for(k = 0; k < spatial; ++k){
int index = j*filters*spatial + f*spatial + k;
delta[index] = delta[index] * 1./(sqrt(variance[f] + .00001f)) + variance_delta[f] * 2. * (x[index] - mean[f]) / (spatial * batch) + mean_delta[f]/(spatial*batch);
}
}
}
}
void resize_batchnorm_layer(layer *layer, int w, int h)
{
fprintf(stderr, "Not implemented\n");
}
void forward_batchnorm_layer(layer l, network net)
{
if(l.type == BATCHNORM) copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);
copy_cpu(l.outputs*l.batch, l.output, 1, l.x, 1);
if(net.train){
mean_cpu(l.output, l.batch, l.out_c, l.out_h*l.out_w, l.mean);
variance_cpu(l.output, l.mean, l.batch, l.out_c, l.out_h*l.out_w, l.variance);
scal_cpu(l.out_c, .99, l.rolling_mean, 1);
axpy_cpu(l.out_c, .01, l.mean, 1, l.rolling_mean, 1);
scal_cpu(l.out_c, .99, l.rolling_variance, 1);
axpy_cpu(l.out_c, .01, l.variance, 1, l.rolling_variance, 1);
normalize_cpu(l.output, l.mean, l.variance, l.batch, l.out_c, l.out_h*l.out_w);
copy_cpu(l.outputs*l.batch, l.output, 1, l.x_norm, 1);
} else {
normalize_cpu(l.output, l.rolling_mean, l.rolling_variance, l.batch, l.out_c, l.out_h*l.out_w);
}
scale_bias(l.output, l.scales, l.batch, l.out_c, l.out_h*l.out_w);
add_bias(l.output, l.biases, l.batch, l.out_c, l.out_h*l.out_w);
}
void backward_batchnorm_layer(layer l, network net)
{
if(!net.train){
l.mean = l.rolling_mean;
l.variance = l.rolling_variance;
}
backward_bias(l.bias_updates, l.delta, l.batch, l.out_c, l.out_w*l.out_h);
backward_scale_cpu(l.x_norm, l.delta, l.batch, l.out_c, l.out_w*l.out_h, l.scale_updates);
scale_bias(l.delta, l.scales, l.batch, l.out_c, l.out_h*l.out_w);
mean_delta_cpu(l.delta, l.variance, l.batch, l.out_c, l.out_w*l.out_h, l.mean_delta);
variance_delta_cpu(l.x, l.delta, l.mean, l.variance, l.batch, l.out_c, l.out_w*l.out_h, l.variance_delta);
normalize_delta_cpu(l.x, l.mean, l.variance, l.mean_delta, l.variance_delta, l.batch, l.out_c, l.out_w*l.out_h, l.delta);
if(l.type == BATCHNORM) copy_cpu(l.outputs*l.batch, l.delta, 1, net.delta, 1);
}
#ifdef GPU
void pull_batchnorm_layer(layer l)
{
cuda_pull_array(l.scales_gpu, l.scales, l.c);
cuda_pull_array(l.rolling_mean_gpu, l.rolling_mean, l.c);
cuda_pull_array(l.rolling_variance_gpu, l.rolling_variance, l.c);
}
void push_batchnorm_layer(layer l)
{
cuda_push_array(l.scales_gpu, l.scales, l.c);
cuda_push_array(l.rolling_mean_gpu, l.rolling_mean, l.c);
cuda_push_array(l.rolling_variance_gpu, l.rolling_variance, l.c);
}
void forward_batchnorm_layer_gpu(layer l, network net)
{
if(l.type == BATCHNORM) copy_gpu(l.outputs*l.batch, net.input_gpu, 1, l.output_gpu, 1);
copy_gpu(l.outputs*l.batch, l.output_gpu, 1, l.x_gpu, 1);
if (net.train) {
#ifdef CUDNN
float one = 1;
float zero = 0;
cudnnBatchNormalizationForwardTraining(cudnn_handle(),
CUDNN_BATCHNORM_SPATIAL,
&one,
&zero,
l.dstTensorDesc,
l.x_gpu,
l.dstTensorDesc,
l.output_gpu,
l.normTensorDesc,
l.scales_gpu,
l.biases_gpu,
.01,
l.rolling_mean_gpu,
l.rolling_variance_gpu,
.00001,
l.mean_gpu,
l.variance_gpu);
#else
fast_mean_gpu(l.output_gpu, l.batch, l.out_c, l.out_h*l.out_w, l.mean_gpu);
fast_variance_gpu(l.output_gpu, l.mean_gpu, l.batch, l.out_c, l.out_h*l.out_w, l.variance_gpu);
scal_gpu(l.out_c, .99, l.rolling_mean_gpu, 1);
axpy_gpu(l.out_c, .01, l.mean_gpu, 1, l.rolling_mean_gpu, 1);
scal_gpu(l.out_c, .99, l.rolling_variance_gpu, 1);
axpy_gpu(l.out_c, .01, l.variance_gpu, 1, l.rolling_variance_gpu, 1);
copy_gpu(l.outputs*l.batch, l.output_gpu, 1, l.x_gpu, 1);
normalize_gpu(l.output_gpu, l.mean_gpu, l.variance_gpu, l.batch, l.out_c, l.out_h*l.out_w);
copy_gpu(l.outputs*l.batch, l.output_gpu, 1, l.x_norm_gpu, 1);
scale_bias_gpu(l.output_gpu, l.scales_gpu, l.batch, l.out_c, l.out_h*l.out_w);
add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.out_c, l.out_w*l.out_h);
#endif
} else {
normalize_gpu(l.output_gpu, l.rolling_mean_gpu, l.rolling_variance_gpu, l.batch, l.out_c, l.out_h*l.out_w);
scale_bias_gpu(l.output_gpu, l.scales_gpu, l.batch, l.out_c, l.out_h*l.out_w);
add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.out_c, l.out_w*l.out_h);
}
}
void backward_batchnorm_layer_gpu(layer l, network net)
{
if(!net.train){
l.mean_gpu = l.rolling_mean_gpu;
l.variance_gpu = l.rolling_variance_gpu;
}
#ifdef CUDNN
float one = 1;
float zero = 0;
cudnnBatchNormalizationBackward(cudnn_handle(),
CUDNN_BATCHNORM_SPATIAL,
&one,
&zero,
&one,
&one,
l.dstTensorDesc,
l.x_gpu,
l.dstTensorDesc,
l.delta_gpu,
l.dstTensorDesc,
l.x_norm_gpu,
l.normTensorDesc,
l.scales_gpu,
l.scale_updates_gpu,
l.bias_updates_gpu,
.00001,
l.mean_gpu,
l.variance_gpu);
copy_gpu(l.outputs*l.batch, l.x_norm_gpu, 1, l.delta_gpu, 1);
#else
backward_bias_gpu(l.bias_updates_gpu, l.delta_gpu, l.batch, l.out_c, l.out_w*l.out_h);
backward_scale_gpu(l.x_norm_gpu, l.delta_gpu, l.batch, l.out_c, l.out_w*l.out_h, l.scale_updates_gpu);
scale_bias_gpu(l.delta_gpu, l.scales_gpu, l.batch, l.out_c, l.out_h*l.out_w);
fast_mean_delta_gpu(l.delta_gpu, l.variance_gpu, l.batch, l.out_c, l.out_w*l.out_h, l.mean_delta_gpu);
fast_variance_delta_gpu(l.x_gpu, l.delta_gpu, l.mean_gpu, l.variance_gpu, l.batch, l.out_c, l.out_w*l.out_h, l.variance_delta_gpu);
normalize_delta_gpu(l.x_gpu, l.mean_gpu, l.variance_gpu, l.mean_delta_gpu, l.variance_delta_gpu, l.batch, l.out_c, l.out_w*l.out_h, l.delta_gpu);
#endif
if(l.type == BATCHNORM) copy_gpu(l.outputs*l.batch, l.delta_gpu, 1, net.delta_gpu, 1);
}
#endif


@ -0,0 +1,19 @@
#ifndef BATCHNORM_LAYER_H
#define BATCHNORM_LAYER_H
#include "image.h"
#include "layer.h"
#include "network.h"
layer make_batchnorm_layer(int batch, int w, int h, int c);
void forward_batchnorm_layer(layer l, network net);
void backward_batchnorm_layer(layer l, network net);
#ifdef GPU
void forward_batchnorm_layer_gpu(layer l, network net);
void backward_batchnorm_layer_gpu(layer l, network net);
void pull_batchnorm_layer(layer l);
void push_batchnorm_layer(layer l);
#endif
#endif


@ -0,0 +1,351 @@
#include "blas.h"
#include <math.h>
#include <assert.h>
#include <float.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void reorg_cpu(float *x, int w, int h, int c, int batch, int stride, int forward, float *out)
{
int b,i,j,k;
int out_c = c/(stride*stride);
for(b = 0; b < batch; ++b){
for(k = 0; k < c; ++k){
for(j = 0; j < h; ++j){
for(i = 0; i < w; ++i){
int in_index = i + w*(j + h*(k + c*b));
int c2 = k % out_c;
int offset = k / out_c;
int w2 = i*stride + offset % stride;
int h2 = j*stride + offset / stride;
int out_index = w2 + w*stride*(h2 + h*stride*(c2 + out_c*b));
if(forward) out[out_index] = x[in_index];
else out[in_index] = x[out_index];
}
}
}
}
}
void flatten(float *x, int size, int layers, int batch, int forward)
{
float *swap = calloc(size*layers*batch, sizeof(float));
int i,c,b;
for(b = 0; b < batch; ++b){
for(c = 0; c < layers; ++c){
for(i = 0; i < size; ++i){
int i1 = b*layers*size + c*size + i;
int i2 = b*layers*size + i*layers + c;
if (forward) swap[i2] = x[i1];
else swap[i1] = x[i2];
}
}
}
memcpy(x, swap, size*layers*batch*sizeof(float));
free(swap);
}
void weighted_sum_cpu(float *a, float *b, float *s, int n, float *c)
{
int i;
for(i = 0; i < n; ++i){
c[i] = s[i]*a[i] + (1-s[i])*(b ? b[i] : 0);
}
}
void weighted_delta_cpu(float *a, float *b, float *s, float *da, float *db, float *ds, int n, float *dc)
{
int i;
for(i = 0; i < n; ++i){
if(da) da[i] += dc[i] * s[i];
if(db) db[i] += dc[i] * (1-s[i]);
ds[i] += dc[i] * (a[i] - b[i]);
}
}
void shortcut_cpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float s1, float s2, float *out)
{
int stride = w1/w2;
int sample = w2/w1;
assert(stride == h1/h2);
assert(sample == h2/h1);
if(stride < 1) stride = 1;
if(sample < 1) sample = 1;
int minw = (w1 < w2) ? w1 : w2;
int minh = (h1 < h2) ? h1 : h2;
int minc = (c1 < c2) ? c1 : c2;
int i,j,k,b;
for(b = 0; b < batch; ++b){
for(k = 0; k < minc; ++k){
for(j = 0; j < minh; ++j){
for(i = 0; i < minw; ++i){
int out_index = i*sample + w2*(j*sample + h2*(k + c2*b));
int add_index = i*stride + w1*(j*stride + h1*(k + c1*b));
out[out_index] = s1*out[out_index] + s2*add[add_index];
}
}
}
}
}
void mean_cpu(float *x, int batch, int filters, int spatial, float *mean)
{
float scale = 1./(batch * spatial);
int i,j,k;
for(i = 0; i < filters; ++i){
mean[i] = 0;
for(j = 0; j < batch; ++j){
for(k = 0; k < spatial; ++k){
int index = j*filters*spatial + i*spatial + k;
mean[i] += x[index];
}
}
mean[i] *= scale;
}
}
void variance_cpu(float *x, float *mean, int batch, int filters, int spatial, float *variance)
{
float scale = 1./(batch * spatial - 1);
int i,j,k;
for(i = 0; i < filters; ++i){
variance[i] = 0;
for(j = 0; j < batch; ++j){
for(k = 0; k < spatial; ++k){
int index = j*filters*spatial + i*spatial + k;
variance[i] += pow((x[index] - mean[i]), 2);
}
}
variance[i] *= scale;
}
}
void l2normalize_cpu(float *x, float *dx, int batch, int filters, int spatial)
{
int b,f,i;
for(b = 0; b < batch; ++b){
for(i = 0; i < spatial; ++i){
float sum = 0;
for(f = 0; f < filters; ++f){
int index = b*filters*spatial + f*spatial + i;
sum += powf(x[index], 2);
}
sum = sqrtf(sum);
for(f = 0; f < filters; ++f){
int index = b*filters*spatial + f*spatial + i;
x[index] /= sum;
dx[index] = (1 - x[index]) / sum;
}
}
}
}
void normalize_cpu(float *x, float *mean, float *variance, int batch, int filters, int spatial)
{
int b, f, i;
for(b = 0; b < batch; ++b){
for(f = 0; f < filters; ++f){
for(i = 0; i < spatial; ++i){
int index = b*filters*spatial + f*spatial + i;
x[index] = (x[index] - mean[f])/(sqrt(variance[f]) + .000001f);
}
}
}
}
void const_cpu(int N, float ALPHA, float *X, int INCX)
{
int i;
for(i = 0; i < N; ++i) X[i*INCX] = ALPHA;
}
void mul_cpu(int N, float *X, int INCX, float *Y, int INCY)
{
int i;
for(i = 0; i < N; ++i) Y[i*INCY] *= X[i*INCX];
}
void pow_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY)
{
int i;
for(i = 0; i < N; ++i) Y[i*INCY] = pow(X[i*INCX], ALPHA);
}
void axpy_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY)
{
int i;
for(i = 0; i < N; ++i) Y[i*INCY] += ALPHA*X[i*INCX];
}
void scal_cpu(int N, float ALPHA, float *X, int INCX)
{
int i;
for(i = 0; i < N; ++i) X[i*INCX] *= ALPHA;
}
void fill_cpu(int N, float ALPHA, float *X, int INCX)
{
int i;
for(i = 0; i < N; ++i) X[i*INCX] = ALPHA;
}
void deinter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT)
{
int i, j;
int index = 0;
for(j = 0; j < B; ++j) {
for(i = 0; i < NX; ++i){
if(X) X[j*NX + i] += OUT[index];
++index;
}
for(i = 0; i < NY; ++i){
if(Y) Y[j*NY + i] += OUT[index];
++index;
}
}
}
void inter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT)
{
int i, j;
int index = 0;
for(j = 0; j < B; ++j) {
for(i = 0; i < NX; ++i){
OUT[index++] = X[j*NX + i];
}
for(i = 0; i < NY; ++i){
OUT[index++] = Y[j*NY + i];
}
}
}
void copy_cpu(int N, float *X, int INCX, float *Y, int INCY)
{
int i;
for(i = 0; i < N; ++i) Y[i*INCY] = X[i*INCX];
}
void mult_add_into_cpu(int N, float *X, float *Y, float *Z)
{
int i;
for(i = 0; i < N; ++i) Z[i] += X[i]*Y[i];
}
void smooth_l1_cpu(int n, float *pred, float *truth, float *delta, float *error)
{
int i;
for(i = 0; i < n; ++i){
float diff = truth[i] - pred[i];
float abs_val = fabs(diff);
if(abs_val < 1) {
error[i] = diff * diff;
delta[i] = diff;
}
else {
error[i] = 2*abs_val - 1;
delta[i] = (diff > 0) ? 1 : -1; /* sign matches the quadratic branch (delta = diff) and l1_cpu */
}
}
}
void l1_cpu(int n, float *pred, float *truth, float *delta, float *error)
{
int i;
for(i = 0; i < n; ++i){
float diff = truth[i] - pred[i];
error[i] = fabs(diff);
delta[i] = diff > 0 ? 1 : -1;
}
}
void softmax_x_ent_cpu(int n, float *pred, float *truth, float *delta, float *error)
{
int i;
for(i = 0; i < n; ++i){
float t = truth[i];
float p = pred[i];
error[i] = (t) ? -log(p) : 0;
delta[i] = t-p;
}
}
void logistic_x_ent_cpu(int n, float *pred, float *truth, float *delta, float *error)
{
int i;
for(i = 0; i < n; ++i){
float t = truth[i];
float p = pred[i];
error[i] = -t*log(p) - (1-t)*log(1-p);
delta[i] = t-p;
}
}
void l2_cpu(int n, float *pred, float *truth, float *delta, float *error)
{
int i;
for(i = 0; i < n; ++i){
float diff = truth[i] - pred[i];
error[i] = diff * diff;
delta[i] = diff;
}
}
float dot_cpu(int N, float *X, int INCX, float *Y, int INCY)
{
int i;
float dot = 0;
for(i = 0; i < N; ++i) dot += X[i*INCX] * Y[i*INCY];
return dot;
}
void softmax(float *input, int n, float temp, int stride, float *output)
{
int i;
float sum = 0;
float largest = -FLT_MAX;
for(i = 0; i < n; ++i){
if(input[i*stride] > largest) largest = input[i*stride];
}
for(i = 0; i < n; ++i){
float e = exp(input[i*stride]/temp - largest/temp);
sum += e;
output[i*stride] = e;
}
for(i = 0; i < n; ++i){
output[i*stride] /= sum;
}
}
void softmax_cpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output)
{
int g, b;
for(b = 0; b < batch; ++b){
for(g = 0; g < groups; ++g){
softmax(input + b*batch_offset + g*group_offset, n, temp, stride, output + b*batch_offset + g*group_offset);
}
}
}
void upsample_cpu(float *in, int w, int h, int c, int batch, int stride, int forward, float scale, float *out)
{
int i, j, k, b;
for(b = 0; b < batch; ++b){
for(k = 0; k < c; ++k){
for(j = 0; j < h*stride; ++j){
for(i = 0; i < w*stride; ++i){
int in_index = b*w*h*c + k*w*h + (j/stride)*w + i/stride;
int out_index = b*w*h*c*stride*stride + k*w*h*stride*stride + j*w*stride + i;
if(forward) out[out_index] = scale*in[in_index];
else in[in_index] += scale*out[out_index];
}
}
}
}
}


@ -0,0 +1,105 @@
#ifndef BLAS_H
#define BLAS_H
#include "darknet.h"
void flatten(float *x, int size, int layers, int batch, int forward);
void pm(int M, int N, float *A);
float *random_matrix(int rows, int cols);
void time_random_matrix(int TA, int TB, int m, int k, int n);
void reorg_cpu(float *x, int w, int h, int c, int batch, int stride, int forward, float *out);
void test_blas();
void inter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT);
void deinter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT);
void mult_add_into_cpu(int N, float *X, float *Y, float *Z);
void const_cpu(int N, float ALPHA, float *X, int INCX);
void constrain_gpu(int N, float ALPHA, float * X, int INCX);
void pow_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY);
void mul_cpu(int N, float *X, int INCX, float *Y, int INCY);
int test_gpu_blas();
void shortcut_cpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float s1, float s2, float *out);
void mean_cpu(float *x, int batch, int filters, int spatial, float *mean);
void variance_cpu(float *x, float *mean, int batch, int filters, int spatial, float *variance);
void scale_bias(float *output, float *scales, int batch, int n, int size);
void backward_scale_cpu(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates);
void mean_delta_cpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta);
void variance_delta_cpu(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta);
void normalize_delta_cpu(float *x, float *mean, float *variance, float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta);
void l2normalize_cpu(float *x, float *dx, int batch, int filters, int spatial);
void smooth_l1_cpu(int n, float *pred, float *truth, float *delta, float *error);
void l2_cpu(int n, float *pred, float *truth, float *delta, float *error);
void l1_cpu(int n, float *pred, float *truth, float *delta, float *error);
void logistic_x_ent_cpu(int n, float *pred, float *truth, float *delta, float *error);
void softmax_x_ent_cpu(int n, float *pred, float *truth, float *delta, float *error);
void weighted_sum_cpu(float *a, float *b, float *s, int num, float *c);
void weighted_delta_cpu(float *a, float *b, float *s, float *da, float *db, float *ds, int n, float *dc);
void softmax(float *input, int n, float temp, int stride, float *output);
void softmax_cpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output);
void upsample_cpu(float *in, int w, int h, int c, int batch, int stride, int forward, float scale, float *out);
#ifdef GPU
#include "cuda.h"
#include "tree.h"
void axpy_gpu(int N, float ALPHA, float * X, int INCX, float * Y, int INCY);
void axpy_gpu_offset(int N, float ALPHA, float * X, int OFFX, int INCX, float * Y, int OFFY, int INCY);
void copy_gpu(int N, float * X, int INCX, float * Y, int INCY);
void copy_gpu_offset(int N, float * X, int OFFX, int INCX, float * Y, int OFFY, int INCY);
void add_gpu(int N, float ALPHA, float * X, int INCX);
void supp_gpu(int N, float ALPHA, float * X, int INCX);
void mask_gpu(int N, float * X, float mask_num, float * mask, float val);
void scale_mask_gpu(int N, float * X, float mask_num, float * mask, float scale);
void const_gpu(int N, float ALPHA, float *X, int INCX);
void pow_gpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY);
void mul_gpu(int N, float *X, int INCX, float *Y, int INCY);
void mean_gpu(float *x, int batch, int filters, int spatial, float *mean);
void variance_gpu(float *x, float *mean, int batch, int filters, int spatial, float *variance);
void normalize_gpu(float *x, float *mean, float *variance, int batch, int filters, int spatial);
void l2normalize_gpu(float *x, float *dx, int batch, int filters, int spatial);
void normalize_delta_gpu(float *x, float *mean, float *variance, float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta);
void fast_mean_delta_gpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta);
void fast_variance_delta_gpu(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta);
void fast_variance_gpu(float *x, float *mean, int batch, int filters, int spatial, float *variance);
void fast_mean_gpu(float *x, int batch, int filters, int spatial, float *mean);
void shortcut_gpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float s1, float s2, float *out);
void scale_bias_gpu(float *output, float *biases, int batch, int n, int size);
void backward_scale_gpu(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates);
void scale_bias_gpu(float *output, float *biases, int batch, int n, int size);
void add_bias_gpu(float *output, float *biases, int batch, int n, int size);
void backward_bias_gpu(float *bias_updates, float *delta, int batch, int n, int size);
void logistic_x_ent_gpu(int n, float *pred, float *truth, float *delta, float *error);
void softmax_x_ent_gpu(int n, float *pred, float *truth, float *delta, float *error);
void smooth_l1_gpu(int n, float *pred, float *truth, float *delta, float *error);
void l2_gpu(int n, float *pred, float *truth, float *delta, float *error);
void l1_gpu(int n, float *pred, float *truth, float *delta, float *error);
void wgan_gpu(int n, float *pred, float *truth, float *delta, float *error);
void weighted_delta_gpu(float *a, float *b, float *s, float *da, float *db, float *ds, int num, float *dc);
void weighted_sum_gpu(float *a, float *b, float *s, int num, float *c);
void mult_add_into_gpu(int num, float *a, float *b, float *c);
void inter_gpu(int NX, float *X, int NY, float *Y, int B, float *OUT);
void deinter_gpu(int NX, float *X, int NY, float *Y, int B, float *OUT);
void reorg_gpu(float *x, int w, int h, int c, int batch, int stride, int forward, float *out);
void softmax_gpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output);
void adam_update_gpu(float *w, float *d, float *m, float *v, float B1, float B2, float eps, float decay, float rate, int n, int batch, int t);
void adam_gpu(int n, float *x, float *m, float *v, float B1, float B2, float rate, float eps, int t);
void flatten_gpu(float *x, int spatial, int layers, int batch, int forward, float *out);
void softmax_tree(float *input, int spatial, int batch, int stride, float temp, float *output, tree hier);
void upsample_gpu(float *in, int w, int h, int c, int batch, int stride, int forward, float scale, float *out);
#endif
#endif

File diff suppressed because it is too large


@ -0,0 +1,357 @@
#include "box.h"
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
int nms_comparator(const void *pa, const void *pb)
{
detection a = *(detection *)pa;
detection b = *(detection *)pb;
float diff = 0;
if(b.sort_class >= 0){
diff = a.prob[b.sort_class] - b.prob[b.sort_class];
} else {
diff = a.objectness - b.objectness;
}
if(diff < 0) return 1;
else if(diff > 0) return -1;
return 0;
}
void do_nms_obj(detection *dets, int total, int classes, float thresh)
{
int i, j, k;
k = total-1;
for(i = 0; i <= k; ++i){
if(dets[i].objectness == 0){
detection swap = dets[i];
dets[i] = dets[k];
dets[k] = swap;
--k;
--i;
}
}
total = k+1;
for(i = 0; i < total; ++i){
dets[i].sort_class = -1;
}
qsort(dets, total, sizeof(detection), nms_comparator);
for(i = 0; i < total; ++i){
if(dets[i].objectness == 0) continue;
box a = dets[i].bbox;
for(j = i+1; j < total; ++j){
if(dets[j].objectness == 0) continue;
box b = dets[j].bbox;
if (box_iou(a, b) > thresh){
dets[j].objectness = 0;
for(k = 0; k < classes; ++k){
dets[j].prob[k] = 0;
}
}
}
}
}
void do_nms_sort(detection *dets, int total, int classes, float thresh)
{
int i, j, k;
k = total-1;
for(i = 0; i <= k; ++i){
if(dets[i].objectness == 0){
detection swap = dets[i];
dets[i] = dets[k];
dets[k] = swap;
--k;
--i;
}
}
total = k+1;
for(k = 0; k < classes; ++k){
for(i = 0; i < total; ++i){
dets[i].sort_class = k;
}
qsort(dets, total, sizeof(detection), nms_comparator);
for(i = 0; i < total; ++i){
if(dets[i].prob[k] == 0) continue;
box a = dets[i].bbox;
for(j = i+1; j < total; ++j){
box b = dets[j].bbox;
if (box_iou(a, b) > thresh){
dets[j].prob[k] = 0;
}
}
}
}
}
box float_to_box(float *f, int stride)
{
box b = {0};
b.x = f[0];
b.y = f[1*stride];
b.w = f[2*stride];
b.h = f[3*stride];
return b;
}
dbox derivative(box a, box b)
{
dbox d;
d.dx = 0;
d.dw = 0;
float l1 = a.x - a.w/2;
float l2 = b.x - b.w/2;
if (l1 > l2){
d.dx -= 1;
d.dw += .5;
}
float r1 = a.x + a.w/2;
float r2 = b.x + b.w/2;
if(r1 < r2){
d.dx += 1;
d.dw += .5;
}
if (l1 > r2) {
d.dx = -1;
d.dw = 0;
}
if (r1 < l2){
d.dx = 1;
d.dw = 0;
}
d.dy = 0;
d.dh = 0;
float t1 = a.y - a.h/2;
float t2 = b.y - b.h/2;
if (t1 > t2){
d.dy -= 1;
d.dh += .5;
}
float b1 = a.y + a.h/2;
float b2 = b.y + b.h/2;
if(b1 < b2){
d.dy += 1;
d.dh += .5;
}
if (t1 > b2) {
d.dy = -1;
d.dh = 0;
}
if (b1 < t2){
d.dy = 1;
d.dh = 0;
}
return d;
}
float overlap(float x1, float w1, float x2, float w2)
{
float l1 = x1 - w1/2;
float l2 = x2 - w2/2;
float left = l1 > l2 ? l1 : l2;
float r1 = x1 + w1/2;
float r2 = x2 + w2/2;
float right = r1 < r2 ? r1 : r2;
return right - left;
}
float box_intersection(box a, box b)
{
float w = overlap(a.x, a.w, b.x, b.w);
float h = overlap(a.y, a.h, b.y, b.h);
if(w < 0 || h < 0) return 0;
float area = w*h;
return area;
}
float box_union(box a, box b)
{
float i = box_intersection(a, b);
float u = a.w*a.h + b.w*b.h - i;
return u;
}
float box_iou(box a, box b)
{
return box_intersection(a, b)/box_union(a, b);
}
float box_rmse(box a, box b)
{
return sqrt(pow(a.x-b.x, 2) +
pow(a.y-b.y, 2) +
pow(a.w-b.w, 2) +
pow(a.h-b.h, 2));
}
dbox dintersect(box a, box b)
{
float w = overlap(a.x, a.w, b.x, b.w);
float h = overlap(a.y, a.h, b.y, b.h);
dbox dover = derivative(a, b);
dbox di;
di.dw = dover.dw*h;
di.dx = dover.dx*h;
di.dh = dover.dh*w;
di.dy = dover.dy*w;
return di;
}
dbox dunion(box a, box b)
{
dbox du;
dbox di = dintersect(a, b);
du.dw = a.h - di.dw;
du.dh = a.w - di.dh;
du.dx = -di.dx;
du.dy = -di.dy;
return du;
}
void test_dunion()
{
box a = {0, 0, 1, 1};
box dxa= {0+.0001, 0, 1, 1};
box dya= {0, 0+.0001, 1, 1};
box dwa= {0, 0, 1+.0001, 1};
box dha= {0, 0, 1, 1+.0001};
box b = {.5, .5, .2, .2};
dbox di = dunion(a,b);
printf("Union: %f %f %f %f\n", di.dx, di.dy, di.dw, di.dh);
float inter = box_union(a, b);
float xinter = box_union(dxa, b);
float yinter = box_union(dya, b);
float winter = box_union(dwa, b);
float hinter = box_union(dha, b);
xinter = (xinter - inter)/(.0001);
yinter = (yinter - inter)/(.0001);
winter = (winter - inter)/(.0001);
hinter = (hinter - inter)/(.0001);
printf("Union Manual %f %f %f %f\n", xinter, yinter, winter, hinter);
}
void test_dintersect()
{
box a = {0, 0, 1, 1};
box dxa= {0+.0001, 0, 1, 1};
box dya= {0, 0+.0001, 1, 1};
box dwa= {0, 0, 1+.0001, 1};
box dha= {0, 0, 1, 1+.0001};
box b = {.5, .5, .2, .2};
dbox di = dintersect(a,b);
printf("Inter: %f %f %f %f\n", di.dx, di.dy, di.dw, di.dh);
float inter = box_intersection(a, b);
float xinter = box_intersection(dxa, b);
float yinter = box_intersection(dya, b);
float winter = box_intersection(dwa, b);
float hinter = box_intersection(dha, b);
xinter = (xinter - inter)/(.0001);
yinter = (yinter - inter)/(.0001);
winter = (winter - inter)/(.0001);
hinter = (hinter - inter)/(.0001);
printf("Inter Manual %f %f %f %f\n", xinter, yinter, winter, hinter);
}
void test_box()
{
test_dintersect();
test_dunion();
box a = {0, 0, 1, 1};
box dxa= {0+.00001, 0, 1, 1};
box dya= {0, 0+.00001, 1, 1};
box dwa= {0, 0, 1+.00001, 1};
box dha= {0, 0, 1, 1+.00001};
box b = {.5, 0, .2, .2};
float iou = box_iou(a,b);
iou = (1-iou)*(1-iou);
printf("%f\n", iou);
dbox d = diou(a, b);
printf("%f %f %f %f\n", d.dx, d.dy, d.dw, d.dh);
float xiou = box_iou(dxa, b);
float yiou = box_iou(dya, b);
float wiou = box_iou(dwa, b);
float hiou = box_iou(dha, b);
xiou = ((1-xiou)*(1-xiou) - iou)/(.00001);
yiou = ((1-yiou)*(1-yiou) - iou)/(.00001);
wiou = ((1-wiou)*(1-wiou) - iou)/(.00001);
hiou = ((1-hiou)*(1-hiou) - iou)/(.00001);
printf("manual %f %f %f %f\n", xiou, yiou, wiou, hiou);
}
dbox diou(box a, box b)
{
float u = box_union(a,b);
float i = box_intersection(a,b);
dbox di = dintersect(a,b);
dbox du = dunion(a,b);
dbox dd = {0,0,0,0};
if(i <= 0 || 1) { // "|| 1" makes this always true: the simple (b - a) fallback below always runs, and the analytic dIOU gradient after the return is dead code
dd.dx = b.x - a.x;
dd.dy = b.y - a.y;
dd.dw = b.w - a.w;
dd.dh = b.h - a.h;
return dd;
}
dd.dx = 2*pow((1-(i/u)),1)*(di.dx*u - du.dx*i)/(u*u);
dd.dy = 2*pow((1-(i/u)),1)*(di.dy*u - du.dy*i)/(u*u);
dd.dw = 2*pow((1-(i/u)),1)*(di.dw*u - du.dw*i)/(u*u);
dd.dh = 2*pow((1-(i/u)),1)*(di.dh*u - du.dh*i)/(u*u);
return dd;
}
void do_nms(box *boxes, float **probs, int total, int classes, float thresh)
{
int i, j, k;
for(i = 0; i < total; ++i){
int any = 0;
for(k = 0; k < classes; ++k) any = any || (probs[i][k] > 0);
if(!any) {
continue;
}
for(j = i+1; j < total; ++j){
if (box_iou(boxes[i], boxes[j]) > thresh){
for(k = 0; k < classes; ++k){
if (probs[i][k] < probs[j][k]) probs[i][k] = 0;
else probs[j][k] = 0;
}
}
}
}
}
box encode_box(box b, box anchor)
{
box encode;
encode.x = (b.x - anchor.x) / anchor.w;
encode.y = (b.y - anchor.y) / anchor.h;
encode.w = log2(b.w / anchor.w);
encode.h = log2(b.h / anchor.h);
return encode;
}
box decode_box(box b, box anchor)
{
box decode;
decode.x = b.x * anchor.w + anchor.x;
decode.y = b.y * anchor.h + anchor.y;
decode.w = pow(2., b.w) * anchor.w;
decode.h = pow(2., b.h) * anchor.h;
return decode;
}


@ -0,0 +1,14 @@
#ifndef BOX_H
#define BOX_H
#include "darknet.h"
typedef struct{
float dx, dy, dw, dh;
} dbox;
float box_rmse(box a, box b);
dbox diou(box a, box b);
box decode_box(box b, box anchor);
box encode_box(box b, box anchor);
#endif


@ -0,0 +1 @@


@ -0,0 +1,39 @@
#include <stdio.h>
#include <math.h>
void col2im_add_pixel(float *im, int height, int width, int channels,
int row, int col, int channel, int pad, float val)
{
row -= pad;
col -= pad;
if (row < 0 || col < 0 ||
row >= height || col >= width) return;
im[col + width*(row + height*channel)] += val;
}
//Like im2col_cpu, this may also be adapted from Caffe's implementation.
void col2im_cpu(float* data_col,
int channels, int height, int width,
int ksize, int stride, int pad, float* data_im)
{
int c,h,w;
int height_col = (height + 2*pad - ksize) / stride + 1;
int width_col = (width + 2*pad - ksize) / stride + 1;
int channels_col = channels * ksize * ksize;
for (c = 0; c < channels_col; ++c) {
int w_offset = c % ksize;
int h_offset = (c / ksize) % ksize;
int c_im = c / ksize / ksize;
for (h = 0; h < height_col; ++h) {
for (w = 0; w < width_col; ++w) {
int im_row = h_offset + h * stride;
int im_col = w_offset + w * stride;
int col_index = (c * height_col + h) * width_col + w;
double val = data_col[col_index];
col2im_add_pixel(data_im, height, width, channels,
im_row, im_col, c_im, pad, val);
}
}
}
}


@ -0,0 +1,13 @@
#ifndef COL2IM_H
#define COL2IM_H
void col2im_cpu(float* data_col,
int channels, int height, int width,
int ksize, int stride, int pad, float* data_im);
#ifdef GPU
void col2im_gpu(float *data_col,
int channels, int height, int width,
int ksize, int stride, int pad, float *data_im);
#endif
#endif


@ -0,0 +1,56 @@
#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
#include "col2im.h"
#include "cuda.h"
// src: https://github.com/BVLC/caffe/blob/master/src/caffe/util/im2col.cu
// You may also want to read: https://github.com/BVLC/caffe/blob/master/LICENSE
__global__ void col2im_gpu_kernel(const int n, const float* data_col,
const int height, const int width, const int ksize,
const int pad,
const int stride,
const int height_col, const int width_col,
float *data_im) {
int index = blockIdx.x*blockDim.x+threadIdx.x;
for(; index < n; index += blockDim.x*gridDim.x){
float val = 0;
int w = index % width + pad;
int h = (index / width) % height + pad;
int c = index / (width * height);
// compute the start and end of the output
int w_col_start = (w < ksize) ? 0 : (w - ksize) / stride + 1;
int w_col_end = min(w / stride + 1, width_col);
int h_col_start = (h < ksize) ? 0 : (h - ksize) / stride + 1;
int h_col_end = min(h / stride + 1, height_col);
// equivalent implementation
int offset =
(c * ksize * ksize + h * ksize + w) * height_col * width_col;
int coeff_h_col = (1 - stride * ksize * height_col) * width_col;
int coeff_w_col = (1 - stride * height_col * width_col);
for (int h_col = h_col_start; h_col < h_col_end; ++h_col) {
for (int w_col = w_col_start; w_col < w_col_end; ++w_col) {
val += data_col[offset + h_col * coeff_h_col + w_col * coeff_w_col];
}
}
data_im[index] += val;
}
}
void col2im_gpu(float *data_col,
int channels, int height, int width,
int ksize, int stride, int pad, float *data_im){
// We are going to launch channels * height * width kernel threads, each
// thread responsible for accumulating one pixel of the output image.
int height_col = (height + 2 * pad - ksize) / stride + 1;
int width_col = (width + 2 * pad - ksize) / stride + 1;
int num_kernels = channels * height * width;
col2im_gpu_kernel<<<(num_kernels+BLOCK-1)/BLOCK,
BLOCK>>>(
num_kernels, data_col, height, width, ksize, pad,
stride, height_col,
width_col, data_im);
}


@ -0,0 +1,352 @@
#include <stdio.h>
#include "network.h"
#include "detection_layer.h"
#include "cost_layer.h"
#include "utils.h"
#include "parser.h"
#include "box.h"
void train_compare(char *cfgfile, char *weightfile)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
char *backup_directory = "/home/pjreddie/backup/";
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = 1024;
list *plist = get_paths("data/compare.train.list");
char **paths = (char **)list_to_array(plist);
int N = plist->size;
printf("%d\n", N);
clock_t time;
pthread_t load_thread;
data train;
data buffer;
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.paths = paths;
args.classes = 20;
args.n = imgs;
args.m = N;
args.d = &buffer;
args.type = COMPARE_DATA;
load_thread = load_data_in_thread(args);
int epoch = *net.seen/N;
int i = 0;
while(1){
++i;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%.3f: %f, %f avg, %lf seconds, %ld images\n", (float)*net.seen/N, loss, avg_loss, sec(clock()-time), *net.seen);
free_data(train);
if(i%100 == 0){
char buff[256];
sprintf(buff, "%s/%s_%d_minor_%d.weights",backup_directory,base, epoch, i);
save_weights(net, buff);
}
if(*net.seen/N > epoch){
epoch = *net.seen/N;
i = 0;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
if(epoch%22 == 0) net.learning_rate *= .1;
}
}
pthread_join(load_thread, 0);
free_data(buffer);
free_network(net);
free_ptrs((void**)paths, plist->size);
free_list(plist);
free(base);
}
void validate_compare(char *filename, char *weightfile)
{
int i = 0;
network net = parse_network_cfg(filename);
if(weightfile){
load_weights(&net, weightfile);
}
srand(time(0));
list *plist = get_paths("data/compare.val.list");
//list *plist = get_paths("data/compare.val.old");
char **paths = (char **)list_to_array(plist);
int N = plist->size/2;
free_list(plist);
clock_t time;
int correct = 0;
int total = 0;
int splits = 10;
int num = (i+1)*N/splits - i*N/splits;
data val, buffer;
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.paths = paths;
args.classes = 20;
args.n = num;
args.m = 0;
args.d = &buffer;
args.type = COMPARE_DATA;
pthread_t load_thread = load_data_in_thread(args);
for(i = 1; i <= splits; ++i){
time=clock();
pthread_join(load_thread, 0);
val = buffer;
num = (i+1)*N/splits - i*N/splits;
char **part = paths+(i*N/splits);
if(i != splits){
args.paths = part;
load_thread = load_data_in_thread(args);
}
printf("Loaded: %d images in %lf seconds\n", val.X.rows, sec(clock()-time));
time=clock();
matrix pred = network_predict_data(net, val);
int j,k;
for(j = 0; j < val.y.rows; ++j){
for(k = 0; k < 20; ++k){
if(val.y.vals[j][k*2] != val.y.vals[j][k*2+1]){
++total;
if((val.y.vals[j][k*2] < val.y.vals[j][k*2+1]) == (pred.vals[j][k*2] < pred.vals[j][k*2+1])){
++correct;
}
}
}
}
free_matrix(pred);
printf("%d: Acc: %f, %lf seconds, %d images\n", i, (float)correct/total, sec(clock()-time), val.X.rows);
free_data(val);
}
}
typedef struct {
network net;
char *filename;
int class;
int classes;
float elo;
float *elos;
} sortable_bbox;
int total_compares = 0;
int current_class = 0;
int elo_comparator(const void*a, const void *b)
{
sortable_bbox box1 = *(sortable_bbox*)a;
sortable_bbox box2 = *(sortable_bbox*)b;
if(box1.elos[current_class] == box2.elos[current_class]) return 0;
if(box1.elos[current_class] > box2.elos[current_class]) return -1;
return 1;
}
int bbox_comparator(const void *a, const void *b)
{
++total_compares;
sortable_bbox box1 = *(sortable_bbox*)a;
sortable_bbox box2 = *(sortable_bbox*)b;
network net = box1.net;
int class = box1.class;
image im1 = load_image_color(box1.filename, net.w, net.h);
image im2 = load_image_color(box2.filename, net.w, net.h);
float *X = calloc(net.w*net.h*net.c, sizeof(float));
memcpy(X, im1.data, im1.w*im1.h*im1.c*sizeof(float));
memcpy(X+im1.w*im1.h*im1.c, im2.data, im2.w*im2.h*im2.c*sizeof(float));
float *predictions = network_predict(net, X);
free_image(im1);
free_image(im2);
free(X);
if (predictions[class*2] > predictions[class*2+1]){
return 1;
}
return -1;
}
void bbox_update(sortable_bbox *a, sortable_bbox *b, int class, int result)
{
int k = 32;
float EA = 1./(1+pow(10, (b->elos[class] - a->elos[class])/400.));
float EB = 1./(1+pow(10, (a->elos[class] - b->elos[class])/400.));
float SA = result ? 1 : 0;
float SB = result ? 0 : 1;
a->elos[class] += k*(SA - EA);
b->elos[class] += k*(SB - EB);
}
void bbox_fight(network net, sortable_bbox *a, sortable_bbox *b, int classes, int class)
{
image im1 = load_image_color(a->filename, net.w, net.h);
image im2 = load_image_color(b->filename, net.w, net.h);
float *X = calloc(net.w*net.h*net.c, sizeof(float));
memcpy(X, im1.data, im1.w*im1.h*im1.c*sizeof(float));
memcpy(X+im1.w*im1.h*im1.c, im2.data, im2.w*im2.h*im2.c*sizeof(float));
float *predictions = network_predict(net, X);
++total_compares;
int i;
for(i = 0; i < classes; ++i){
if(class < 0 || class == i){
int result = predictions[i*2] > predictions[i*2+1];
bbox_update(a, b, i, result);
}
}
free_image(im1);
free_image(im2);
free(X);
}
void SortMaster3000(char *filename, char *weightfile)
{
int i = 0;
network net = parse_network_cfg(filename);
if(weightfile){
load_weights(&net, weightfile);
}
srand(time(0));
set_batch_network(&net, 1);
list *plist = get_paths("data/compare.sort.list");
//list *plist = get_paths("data/compare.val.old");
char **paths = (char **)list_to_array(plist);
int N = plist->size;
free_list(plist);
sortable_bbox *boxes = calloc(N, sizeof(sortable_bbox));
printf("Sorting %d boxes...\n", N);
for(i = 0; i < N; ++i){
boxes[i].filename = paths[i];
boxes[i].net = net;
boxes[i].class = 7;
boxes[i].elo = 1500;
}
clock_t time=clock();
qsort(boxes, N, sizeof(sortable_bbox), bbox_comparator);
for(i = 0; i < N; ++i){
printf("%s\n", boxes[i].filename);
}
printf("Sorted in %d compares, %f secs\n", total_compares, sec(clock()-time));
}
void BattleRoyaleWithCheese(char *filename, char *weightfile)
{
int classes = 20;
int i,j;
network net = parse_network_cfg(filename);
if(weightfile){
load_weights(&net, weightfile);
}
srand(time(0));
set_batch_network(&net, 1);
list *plist = get_paths("data/compare.sort.list");
//list *plist = get_paths("data/compare.small.list");
//list *plist = get_paths("data/compare.cat.list");
//list *plist = get_paths("data/compare.val.old");
char **paths = (char **)list_to_array(plist);
int N = plist->size;
int total = N;
free_list(plist);
sortable_bbox *boxes = calloc(N, sizeof(sortable_bbox));
printf("Battling %d boxes...\n", N);
for(i = 0; i < N; ++i){
boxes[i].filename = paths[i];
boxes[i].net = net;
boxes[i].classes = classes;
boxes[i].elos = calloc(classes, sizeof(float));
for(j = 0; j < classes; ++j){
boxes[i].elos[j] = 1500;
}
}
int round;
clock_t time=clock();
for(round = 1; round <= 4; ++round){
clock_t round_time=clock();
printf("Round: %d\n", round);
shuffle(boxes, N, sizeof(sortable_bbox));
for(i = 0; i < N/2; ++i){
bbox_fight(net, boxes+i*2, boxes+i*2+1, classes, -1);
}
printf("Round: %f secs, %d remaining\n", sec(clock()-round_time), N);
}
int class;
for (class = 0; class < classes; ++class){
N = total;
current_class = class;
qsort(boxes, N, sizeof(sortable_bbox), elo_comparator);
N /= 2;
for(round = 1; round <= 100; ++round){
clock_t round_time=clock();
printf("Round: %d\n", round);
sorta_shuffle(boxes, N, sizeof(sortable_bbox), 10);
for(i = 0; i < N/2; ++i){
bbox_fight(net, boxes+i*2, boxes+i*2+1, classes, class);
}
qsort(boxes, N, sizeof(sortable_bbox), elo_comparator);
if(round <= 20) N = (N*9/10)/2*2;
printf("Round: %f secs, %d remaining\n", sec(clock()-round_time), N);
}
char buff[256];
sprintf(buff, "results/battle_%d.log", class);
FILE *outfp = fopen(buff, "w");
for(i = 0; i < N; ++i){
fprintf(outfp, "%s %f\n", boxes[i].filename, boxes[i].elos[class]);
}
fclose(outfp);
}
printf("Tournament in %d compares, %f secs\n", total_compares, sec(clock()-time));
}
void run_compare(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/valid/sort/battle] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
//char *filename = (argc > 5) ? argv[5]: 0;
if(0==strcmp(argv[2], "train")) train_compare(cfg, weights);
else if(0==strcmp(argv[2], "valid")) validate_compare(cfg, weights);
else if(0==strcmp(argv[2], "sort")) SortMaster3000(cfg, weights);
else if(0==strcmp(argv[2], "battle")) BattleRoyaleWithCheese(cfg, weights);
/*
else if(0==strcmp(argv[2], "train")) train_coco(cfg, weights);
else if(0==strcmp(argv[2], "extract")) extract_boxes(cfg, weights);
else if(0==strcmp(argv[2], "valid")) validate_recall(cfg, weights);
*/
}

#include "connected_layer.h"
#include "convolutional_layer.h"
#include "batchnorm_layer.h"
#include "utils.h"
#include "cuda.h"
#include "blas.h"
#include "gemm.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
layer make_connected_layer(int batch, int inputs, int outputs, ACTIVATION activation, int batch_normalize, int adam)
{
int i;
layer l = {0};
l.learning_rate_scale = 1;
l.type = CONNECTED;
l.inputs = inputs;
l.outputs = outputs;
l.batch=batch;
l.batch_normalize = batch_normalize;
l.h = 1;
l.w = 1;
l.c = inputs;
l.out_h = 1;
l.out_w = 1;
l.out_c = outputs;
l.output = calloc(batch*outputs, sizeof(float));
l.delta = calloc(batch*outputs, sizeof(float));
l.weight_updates = calloc(inputs*outputs, sizeof(float));
l.bias_updates = calloc(outputs, sizeof(float));
l.weights = calloc(outputs*inputs, sizeof(float));
l.biases = calloc(outputs, sizeof(float));
l.forward = forward_connected_layer;
l.backward = backward_connected_layer;
l.update = update_connected_layer;
//float scale = 1./sqrt(inputs);
float scale = sqrt(2./inputs);
for(i = 0; i < outputs*inputs; ++i){
l.weights[i] = scale*rand_uniform(-1, 1);
}
for(i = 0; i < outputs; ++i){
l.biases[i] = 0;
}
if(adam){
l.m = calloc(l.inputs*l.outputs, sizeof(float));
l.v = calloc(l.inputs*l.outputs, sizeof(float));
l.bias_m = calloc(l.outputs, sizeof(float));
l.scale_m = calloc(l.outputs, sizeof(float));
l.bias_v = calloc(l.outputs, sizeof(float));
l.scale_v = calloc(l.outputs, sizeof(float));
}
if(batch_normalize){
l.scales = calloc(outputs, sizeof(float));
l.scale_updates = calloc(outputs, sizeof(float));
for(i = 0; i < outputs; ++i){
l.scales[i] = 1;
}
l.mean = calloc(outputs, sizeof(float));
l.mean_delta = calloc(outputs, sizeof(float));
l.variance = calloc(outputs, sizeof(float));
l.variance_delta = calloc(outputs, sizeof(float));
l.rolling_mean = calloc(outputs, sizeof(float));
l.rolling_variance = calloc(outputs, sizeof(float));
l.x = calloc(batch*outputs, sizeof(float));
l.x_norm = calloc(batch*outputs, sizeof(float));
}
#ifdef GPU
l.forward_gpu = forward_connected_layer_gpu;
l.backward_gpu = backward_connected_layer_gpu;
l.update_gpu = update_connected_layer_gpu;
l.weights_gpu = cuda_make_array(l.weights, outputs*inputs);
l.biases_gpu = cuda_make_array(l.biases, outputs);
l.weight_updates_gpu = cuda_make_array(l.weight_updates, outputs*inputs);
l.bias_updates_gpu = cuda_make_array(l.bias_updates, outputs);
l.output_gpu = cuda_make_array(l.output, outputs*batch);
l.delta_gpu = cuda_make_array(l.delta, outputs*batch);
if (adam) {
l.m_gpu = cuda_make_array(0, inputs*outputs);
l.v_gpu = cuda_make_array(0, inputs*outputs);
l.bias_m_gpu = cuda_make_array(0, outputs);
l.bias_v_gpu = cuda_make_array(0, outputs);
l.scale_m_gpu = cuda_make_array(0, outputs);
l.scale_v_gpu = cuda_make_array(0, outputs);
}
if(batch_normalize){
l.mean_gpu = cuda_make_array(l.mean, outputs);
l.variance_gpu = cuda_make_array(l.variance, outputs);
l.rolling_mean_gpu = cuda_make_array(l.mean, outputs);
l.rolling_variance_gpu = cuda_make_array(l.variance, outputs);
l.mean_delta_gpu = cuda_make_array(l.mean, outputs);
l.variance_delta_gpu = cuda_make_array(l.variance, outputs);
l.scales_gpu = cuda_make_array(l.scales, outputs);
l.scale_updates_gpu = cuda_make_array(l.scale_updates, outputs);
l.x_gpu = cuda_make_array(l.output, l.batch*outputs);
l.x_norm_gpu = cuda_make_array(l.output, l.batch*outputs);
#ifdef CUDNN
cudnnCreateTensorDescriptor(&l.normTensorDesc);
cudnnCreateTensorDescriptor(&l.dstTensorDesc);
cudnnSetTensor4dDescriptor(l.dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l.batch, l.out_c, l.out_h, l.out_w);
cudnnSetTensor4dDescriptor(l.normTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l.out_c, 1, 1);
#endif
}
#endif
l.activation = activation;
fprintf(stderr, "connected %4d -> %4d\n", inputs, outputs);
return l;
}
void update_connected_layer(layer l, update_args a)
{
float learning_rate = a.learning_rate*l.learning_rate_scale;
float momentum = a.momentum;
float decay = a.decay;
int batch = a.batch;
axpy_cpu(l.outputs, learning_rate/batch, l.bias_updates, 1, l.biases, 1);
scal_cpu(l.outputs, momentum, l.bias_updates, 1);
if(l.batch_normalize){
axpy_cpu(l.outputs, learning_rate/batch, l.scale_updates, 1, l.scales, 1);
scal_cpu(l.outputs, momentum, l.scale_updates, 1);
}
axpy_cpu(l.inputs*l.outputs, -decay*batch, l.weights, 1, l.weight_updates, 1);
axpy_cpu(l.inputs*l.outputs, learning_rate/batch, l.weight_updates, 1, l.weights, 1);
scal_cpu(l.inputs*l.outputs, momentum, l.weight_updates, 1);
}
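The three `axpy`/`scal` calls on the weights above implement one SGD step with momentum and L2 weight decay: the decay term is folded into the accumulated gradient, the scaled gradient is applied, and what remains of the gradient buffer becomes the momentum carry. A per-weight sketch under those same conventions (`sgd_step` is an illustrative name, not a darknet function):

```c
/* One SGD step for a single weight, mirroring update_connected_layer:
 *   dw += -decay*batch*w;   (L2 weight-decay gradient)
 *   w  += (lr/batch)*dw;    (apply the averaged update)
 *   dw *= momentum;         (retained as the momentum term)            */
static void sgd_step(float *w, float *dw,
                     float lr, float momentum, float decay, int batch)
{
    *dw += -decay*batch*(*w);
    *w  += (lr/batch)*(*dw);
    *dw *= momentum;
}
```

Note the `decay*batch` factor: gradients are accumulated over a batch and divided by `batch` only at apply time, so the decay term is pre-scaled to survive that division.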
void forward_connected_layer(layer l, network net)
{
fill_cpu(l.outputs*l.batch, 0, l.output, 1);
int m = l.batch;
int k = l.inputs;
int n = l.outputs;
float *a = net.input;
float *b = l.weights;
float *c = l.output;
gemm(0,1,m,n,k,1,a,k,b,k,1,c,n);
if(l.batch_normalize){
forward_batchnorm_layer(l, net);
} else {
add_bias(l.output, l.biases, l.batch, l.outputs, 1);
}
activate_array(l.output, l.outputs*l.batch, l.activation);
}
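The `gemm(0,1,...)` call above computes the fully-connected forward pass as a matrix product: each output is the dot product of an input row with a weight row (weights stored row-major, `outputs x inputs`, hence the transpose flag on B). A naive single-sample equivalent, for reference (`fc_forward` is an illustrative name):

```c
/* y = W x for one sample; w is outputs-by-inputs, row-major,
 * the same layout forward_connected_layer feeds to gemm. */
static void fc_forward(const float *w, const float *x, float *y,
                       int inputs, int outputs)
{
    int i, j;
    for(i = 0; i < outputs; ++i){
        y[i] = 0;
        for(j = 0; j < inputs; ++j){
            y[i] += w[i*inputs + j]*x[j];
        }
    }
}
```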
void backward_connected_layer(layer l, network net)
{
gradient_array(l.output, l.outputs*l.batch, l.activation, l.delta);
if(l.batch_normalize){
backward_batchnorm_layer(l, net);
} else {
backward_bias(l.bias_updates, l.delta, l.batch, l.outputs, 1);
}
int m = l.outputs;
int k = l.batch;
int n = l.inputs;
float *a = l.delta;
float *b = net.input;
float *c = l.weight_updates;
gemm(1,0,m,n,k,1,a,m,b,n,1,c,n);
m = l.batch;
k = l.outputs;
n = l.inputs;
a = l.delta;
b = l.weights;
c = net.delta;
if(c) gemm(0,0,m,n,k,1,a,k,b,n,1,c,n);
}
void denormalize_connected_layer(layer l)
{
int i, j;
for(i = 0; i < l.outputs; ++i){
float scale = l.scales[i]/sqrt(l.rolling_variance[i] + .000001);
for(j = 0; j < l.inputs; ++j){
l.weights[i*l.inputs + j] *= scale;
}
l.biases[i] -= l.rolling_mean[i] * scale;
l.scales[i] = 1;
l.rolling_mean[i] = 0;
l.rolling_variance[i] = 1;
}
}
void statistics_connected_layer(layer l)
{
if(l.batch_normalize){
printf("Scales ");
print_statistics(l.scales, l.outputs);
/*
printf("Rolling Mean ");
print_statistics(l.rolling_mean, l.outputs);
printf("Rolling Variance ");
print_statistics(l.rolling_variance, l.outputs);
*/
}
printf("Biases ");
print_statistics(l.biases, l.outputs);
printf("Weights ");
print_statistics(l.weights, l.outputs);
}
#ifdef GPU
void pull_connected_layer(layer l)
{
cuda_pull_array(l.weights_gpu, l.weights, l.inputs*l.outputs);
cuda_pull_array(l.biases_gpu, l.biases, l.outputs);
cuda_pull_array(l.weight_updates_gpu, l.weight_updates, l.inputs*l.outputs);
cuda_pull_array(l.bias_updates_gpu, l.bias_updates, l.outputs);
if (l.batch_normalize){
cuda_pull_array(l.scales_gpu, l.scales, l.outputs);
cuda_pull_array(l.rolling_mean_gpu, l.rolling_mean, l.outputs);
cuda_pull_array(l.rolling_variance_gpu, l.rolling_variance, l.outputs);
}
}
void push_connected_layer(layer l)
{
cuda_push_array(l.weights_gpu, l.weights, l.inputs*l.outputs);
cuda_push_array(l.biases_gpu, l.biases, l.outputs);
cuda_push_array(l.weight_updates_gpu, l.weight_updates, l.inputs*l.outputs);
cuda_push_array(l.bias_updates_gpu, l.bias_updates, l.outputs);
if (l.batch_normalize){
cuda_push_array(l.scales_gpu, l.scales, l.outputs);
cuda_push_array(l.rolling_mean_gpu, l.rolling_mean, l.outputs);
cuda_push_array(l.rolling_variance_gpu, l.rolling_variance, l.outputs);
}
}
void update_connected_layer_gpu(layer l, update_args a)
{
float learning_rate = a.learning_rate*l.learning_rate_scale;
float momentum = a.momentum;
float decay = a.decay;
int batch = a.batch;
if(a.adam){
adam_update_gpu(l.weights_gpu, l.weight_updates_gpu, l.m_gpu, l.v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.inputs*l.outputs, batch, a.t);
adam_update_gpu(l.biases_gpu, l.bias_updates_gpu, l.bias_m_gpu, l.bias_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.outputs, batch, a.t);
if(l.scales_gpu){
adam_update_gpu(l.scales_gpu, l.scale_updates_gpu, l.scale_m_gpu, l.scale_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.outputs, batch, a.t);
}
}else{
axpy_gpu(l.outputs, learning_rate/batch, l.bias_updates_gpu, 1, l.biases_gpu, 1);
scal_gpu(l.outputs, momentum, l.bias_updates_gpu, 1);
if(l.batch_normalize){
axpy_gpu(l.outputs, learning_rate/batch, l.scale_updates_gpu, 1, l.scales_gpu, 1);
scal_gpu(l.outputs, momentum, l.scale_updates_gpu, 1);
}
axpy_gpu(l.inputs*l.outputs, -decay*batch, l.weights_gpu, 1, l.weight_updates_gpu, 1);
axpy_gpu(l.inputs*l.outputs, learning_rate/batch, l.weight_updates_gpu, 1, l.weights_gpu, 1);
scal_gpu(l.inputs*l.outputs, momentum, l.weight_updates_gpu, 1);
}
}
void forward_connected_layer_gpu(layer l, network net)
{
fill_gpu(l.outputs*l.batch, 0, l.output_gpu, 1);
int m = l.batch;
int k = l.inputs;
int n = l.outputs;
float * a = net.input_gpu;
float * b = l.weights_gpu;
float * c = l.output_gpu;
gemm_gpu(0,1,m,n,k,1,a,k,b,k,1,c,n);
if (l.batch_normalize) {
forward_batchnorm_layer_gpu(l, net);
} else {
add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.outputs, 1);
}
activate_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation);
}
void backward_connected_layer_gpu(layer l, network net)
{
constrain_gpu(l.outputs*l.batch, 1, l.delta_gpu, 1);
gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);
if(l.batch_normalize){
backward_batchnorm_layer_gpu(l, net);
} else {
backward_bias_gpu(l.bias_updates_gpu, l.delta_gpu, l.batch, l.outputs, 1);
}
int m = l.outputs;
int k = l.batch;
int n = l.inputs;
float * a = l.delta_gpu;
float * b = net.input_gpu;
float * c = l.weight_updates_gpu;
gemm_gpu(1,0,m,n,k,1,a,m,b,n,1,c,n);
m = l.batch;
k = l.outputs;
n = l.inputs;
a = l.delta_gpu;
b = l.weights_gpu;
c = net.delta_gpu;
if(c) gemm_gpu(0,0,m,n,k,1,a,k,b,n,1,c,n);
}
#endif

#ifndef CONNECTED_LAYER_H
#define CONNECTED_LAYER_H
#include "activations.h"
#include "layer.h"
#include "network.h"
layer make_connected_layer(int batch, int inputs, int outputs, ACTIVATION activation, int batch_normalize, int adam);
void forward_connected_layer(layer l, network net);
void backward_connected_layer(layer l, network net);
void update_connected_layer(layer l, update_args a);
#ifdef GPU
void forward_connected_layer_gpu(layer l, network net);
void backward_connected_layer_gpu(layer l, network net);
void update_connected_layer_gpu(layer l, update_args a);
void push_connected_layer(layer l);
void pull_connected_layer(layer l);
#endif
#endif

#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
#include "convolutional_layer.h"
#include "batchnorm_layer.h"
#include "gemm.h"
#include "blas.h"
#include "im2col.h"
#include "col2im.h"
#include "utils.h"
#include "cuda.h"
__global__ void binarize_kernel(float *x, int n, float *binary)
{
int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if (i >= n) return;
binary[i] = (x[i] >= 0) ? 1 : -1;
}
void binarize_gpu(float *x, int n, float *binary)
{
binarize_kernel<<<cuda_gridsize(n), BLOCK>>>(x, n, binary);
check_error(cudaPeekAtLastError());
}
__global__ void binarize_input_kernel(float *input, int n, int size, float *binary)
{
int s = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if (s >= size) return;
int i = 0;
float mean = 0;
for(i = 0; i < n; ++i){
mean += fabsf(input[i*size + s]);
}
mean = mean / n;
for(i = 0; i < n; ++i){
binary[i*size + s] = (input[i*size + s] > 0) ? mean : -mean;
}
}
void binarize_input_gpu(float *input, int n, int size, float *binary)
{
binarize_input_kernel<<<cuda_gridsize(size), BLOCK>>>(input, n, size, binary);
check_error(cudaPeekAtLastError());
}
__global__ void binarize_weights_kernel(float *weights, int n, int size, float *binary)
{
int f = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if (f >= n) return;
int i = 0;
float mean = 0;
for(i = 0; i < size; ++i){
mean += fabsf(weights[f*size + i]);
}
mean = mean / size;
for(i = 0; i < size; ++i){
binary[f*size + i] = (weights[f*size + i] > 0) ? mean : -mean;
//binary[f*size + i] = weights[f*size + i];
}
}
void binarize_weights_gpu(float *weights, int n, int size, float *binary)
{
binarize_weights_kernel<<<cuda_gridsize(n), BLOCK>>>(weights, n, size, binary);
check_error(cudaPeekAtLastError());
}
void forward_convolutional_layer_gpu(convolutional_layer l, network net)
{
fill_gpu(l.outputs*l.batch, 0, l.output_gpu, 1);
if(l.binary){
binarize_weights_gpu(l.weights_gpu, l.n, l.c/l.groups*l.size*l.size, l.binary_weights_gpu);
swap_binary(&l);
}
if(l.xnor){
binarize_weights_gpu(l.weights_gpu, l.n, l.c/l.groups*l.size*l.size, l.binary_weights_gpu);
swap_binary(&l);
binarize_gpu(net.input_gpu, l.c*l.h*l.w*l.batch, l.binary_input_gpu);
net.input_gpu = l.binary_input_gpu;
}
#ifdef CUDNN
float one = 1;
cudnnConvolutionForward(cudnn_handle(),
&one,
l.srcTensorDesc,
net.input_gpu,
l.weightDesc,
l.weights_gpu,
l.convDesc,
l.fw_algo,
net.workspace,
l.workspace_size,
&one,
l.dstTensorDesc,
l.output_gpu);
#else
int i, j;
int m = l.n/l.groups;
int k = l.size*l.size*l.c/l.groups;
int n = l.out_w*l.out_h;
for(i = 0; i < l.batch; ++i){
for(j = 0; j < l.groups; ++j){
float *a = l.weights_gpu + j*l.nweights/l.groups;
float *b = net.workspace;
float *c = l.output_gpu + (i*l.groups + j)*n*m;
float *im = net.input_gpu + (i*l.groups + j)*l.c/l.groups*l.h*l.w;
if (l.size == 1){
b = im;
} else {
im2col_gpu(im, l.c/l.groups, l.h, l.w, l.size, l.stride, l.pad, b);
}
gemm_gpu(0,0,m,n,k,1,a,k,b,n,1,c,n);
}
}
#endif
if (l.batch_normalize) {
forward_batchnorm_layer_gpu(l, net);
} else {
add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.n, l.out_w*l.out_h);
}
activate_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation);
//if(l.dot > 0) dot_error_gpu(l);
if(l.binary || l.xnor) swap_binary(&l);
}
__global__ void smooth_kernel(float *x, int n, int w, int h, int c, int size, float rate, float *delta)
{
int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(id >= n) return;
int j = id % w;
id /= w;
int i = id % h;
id /= h;
int k = id % c;
id /= c;
int b = id;
int w_offset = -(size/2.f);
int h_offset = -(size/2.f);
int out_index = j + w*(i + h*(k + c*b));
int l, m;
for(l = 0; l < size; ++l){
for(m = 0; m < size; ++m){
int cur_h = h_offset + i + l;
int cur_w = w_offset + j + m;
int index = cur_w + w*(cur_h + h*(k + b*c));
int valid = (cur_h >= 0 && cur_h < h &&
cur_w >= 0 && cur_w < w);
delta[out_index] += valid ? rate*(x[index] - x[out_index]) : 0;
}
}
}
void smooth_layer(layer l, int size, float rate)
{
int h = l.out_h;
int w = l.out_w;
int c = l.out_c;
size_t n = h*w*c*l.batch;
smooth_kernel<<<cuda_gridsize(n), BLOCK>>>(l.output_gpu, n, l.w, l.h, l.c, size, rate, l.delta_gpu);
check_error(cudaPeekAtLastError());
}
void backward_convolutional_layer_gpu(convolutional_layer l, network net)
{
if(l.smooth){
smooth_layer(l, 5, l.smooth);
}
//constrain_gpu(l.outputs*l.batch, 1, l.delta_gpu, 1);
gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);
if(l.batch_normalize){
backward_batchnorm_layer_gpu(l, net);
} else {
backward_bias_gpu(l.bias_updates_gpu, l.delta_gpu, l.batch, l.n, l.out_w*l.out_h);
}
float *original_input = net.input_gpu;
if(l.xnor) net.input_gpu = l.binary_input_gpu;
#ifdef CUDNN
float one = 1;
cudnnConvolutionBackwardFilter(cudnn_handle(),
&one,
l.srcTensorDesc,
net.input_gpu,
l.ddstTensorDesc,
l.delta_gpu,
l.convDesc,
l.bf_algo,
net.workspace,
l.workspace_size,
&one,
l.dweightDesc,
l.weight_updates_gpu);
if(net.delta_gpu){
if(l.binary || l.xnor) swap_binary(&l);
cudnnConvolutionBackwardData(cudnn_handle(),
&one,
l.weightDesc,
l.weights_gpu,
l.ddstTensorDesc,
l.delta_gpu,
l.convDesc,
l.bd_algo,
net.workspace,
l.workspace_size,
&one,
l.dsrcTensorDesc,
net.delta_gpu);
if(l.binary || l.xnor) swap_binary(&l);
if(l.xnor) gradient_array_gpu(original_input, l.batch*l.c*l.h*l.w, HARDTAN, net.delta_gpu);
}
#else
int m = l.n/l.groups;
int n = l.size*l.size*l.c/l.groups;
int k = l.out_w*l.out_h;
int i, j;
for(i = 0; i < l.batch; ++i){
for(j = 0; j < l.groups; ++j){
float *a = l.delta_gpu + (i*l.groups + j)*m*k;
float *b = net.workspace;
float *c = l.weight_updates_gpu + j*l.nweights/l.groups;
float *im = net.input_gpu+(i*l.groups + j)*l.c/l.groups*l.h*l.w;
float *imd = net.delta_gpu+(i*l.groups + j)*l.c/l.groups*l.h*l.w;
im2col_gpu(im, l.c/l.groups, l.h, l.w, l.size, l.stride, l.pad, b);
gemm_gpu(0,1,m,n,k,1,a,k,b,k,1,c,n);
if (net.delta_gpu) {
if (l.binary || l.xnor) swap_binary(&l);
a = l.weights_gpu + j*l.nweights/l.groups;
b = l.delta_gpu + (i*l.groups + j)*m*k;
c = net.workspace;
if (l.size == 1) {
c = imd;
}
gemm_gpu(1,0,n,k,m,1,a,n,b,k,0,c,k);
if (l.size != 1) {
col2im_gpu(net.workspace, l.c/l.groups, l.h, l.w, l.size, l.stride, l.pad, imd);
}
if(l.binary || l.xnor) {
swap_binary(&l);
}
}
if(l.xnor) gradient_array_gpu(original_input + i*l.c*l.h*l.w, l.c*l.h*l.w, HARDTAN, net.delta_gpu + i*l.c*l.h*l.w);
}
}
#endif
}
void pull_convolutional_layer(layer l)
{
cuda_pull_array(l.weights_gpu, l.weights, l.nweights);
cuda_pull_array(l.biases_gpu, l.biases, l.n);
cuda_pull_array(l.weight_updates_gpu, l.weight_updates, l.nweights);
cuda_pull_array(l.bias_updates_gpu, l.bias_updates, l.n);
if (l.batch_normalize){
cuda_pull_array(l.scales_gpu, l.scales, l.n);
cuda_pull_array(l.rolling_mean_gpu, l.rolling_mean, l.n);
cuda_pull_array(l.rolling_variance_gpu, l.rolling_variance, l.n);
}
}
void push_convolutional_layer(layer l)
{
cuda_push_array(l.weights_gpu, l.weights, l.nweights);
cuda_push_array(l.biases_gpu, l.biases, l.n);
cuda_push_array(l.weight_updates_gpu, l.weight_updates, l.nweights);
cuda_push_array(l.bias_updates_gpu, l.bias_updates, l.n);
if (l.batch_normalize){
cuda_push_array(l.scales_gpu, l.scales, l.n);
cuda_push_array(l.rolling_mean_gpu, l.rolling_mean, l.n);
cuda_push_array(l.rolling_variance_gpu, l.rolling_variance, l.n);
}
}
void update_convolutional_layer_gpu(layer l, update_args a)
{
float learning_rate = a.learning_rate*l.learning_rate_scale;
float momentum = a.momentum;
float decay = a.decay;
int batch = a.batch;
if(a.adam){
adam_update_gpu(l.weights_gpu, l.weight_updates_gpu, l.m_gpu, l.v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.nweights, batch, a.t);
adam_update_gpu(l.biases_gpu, l.bias_updates_gpu, l.bias_m_gpu, l.bias_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.n, batch, a.t);
if(l.scales_gpu){
adam_update_gpu(l.scales_gpu, l.scale_updates_gpu, l.scale_m_gpu, l.scale_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.n, batch, a.t);
}
}else{
axpy_gpu(l.nweights, -decay*batch, l.weights_gpu, 1, l.weight_updates_gpu, 1);
axpy_gpu(l.nweights, learning_rate/batch, l.weight_updates_gpu, 1, l.weights_gpu, 1);
scal_gpu(l.nweights, momentum, l.weight_updates_gpu, 1);
axpy_gpu(l.n, learning_rate/batch, l.bias_updates_gpu, 1, l.biases_gpu, 1);
scal_gpu(l.n, momentum, l.bias_updates_gpu, 1);
if(l.scales_gpu){
axpy_gpu(l.n, learning_rate/batch, l.scale_updates_gpu, 1, l.scales_gpu, 1);
scal_gpu(l.n, momentum, l.scale_updates_gpu, 1);
}
}
if(l.clip){
constrain_gpu(l.nweights, l.clip, l.weights_gpu, 1);
}
}

#include "convolutional_layer.h"
#include "utils.h"
#include "batchnorm_layer.h"
#include "im2col.h"
#include "col2im.h"
#include "blas.h"
#include "gemm.h"
#include <stdio.h>
#include <time.h>
#ifdef AI2
#include "xnor_layer.h"
#endif
void swap_binary(convolutional_layer *l)
{
float *swap = l->weights;
l->weights = l->binary_weights;
l->binary_weights = swap;
#ifdef GPU
swap = l->weights_gpu;
l->weights_gpu = l->binary_weights_gpu;
l->binary_weights_gpu = swap;
#endif
}
void binarize_weights(float *weights, int n, int size, float *binary)
{
int i, f;
for(f = 0; f < n; ++f){
float mean = 0;
for(i = 0; i < size; ++i){
mean += fabs(weights[f*size + i]);
}
mean = mean / size;
for(i = 0; i < size; ++i){
binary[f*size + i] = (weights[f*size + i] > 0) ? mean : -mean;
}
}
}
void binarize_cpu(float *input, int n, float *binary)
{
int i;
for(i = 0; i < n; ++i){
binary[i] = (input[i] > 0) ? 1 : -1;
}
}
void binarize_input(float *input, int n, int size, float *binary)
{
int i, s;
for(s = 0; s < size; ++s){
float mean = 0;
for(i = 0; i < n; ++i){
mean += fabs(input[i*size + s]);
}
mean = mean / n;
for(i = 0; i < n; ++i){
binary[i*size + s] = (input[i*size + s] > 0) ? mean : -mean;
}
}
}
int convolutional_out_height(convolutional_layer l)
{
return (l.h + 2*l.pad - l.size) / l.stride + 1;
}
int convolutional_out_width(convolutional_layer l)
{
return (l.w + 2*l.pad - l.size) / l.stride + 1;
}
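Both helpers above apply the standard convolution output-size formula, floor((in + 2*pad - size)/stride) + 1, where integer division supplies the floor. A minimal sketch (`conv_out_dim` is an illustrative name):

```c
/* Output spatial extent of a convolution along one dimension,
 * identical to convolutional_out_width/height above. */
static int conv_out_dim(int in, int pad, int size, int stride)
{
    return (in + 2*pad - size)/stride + 1;
}
```

For example, a 3x3 kernel with pad 1 and stride 1 preserves the input size, while stride 2 without padding roughly halves it.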
image get_convolutional_image(convolutional_layer l)
{
return float_to_image(l.out_w,l.out_h,l.out_c,l.output);
}
image get_convolutional_delta(convolutional_layer l)
{
return float_to_image(l.out_w,l.out_h,l.out_c,l.delta);
}
static size_t get_workspace_size(layer l){
#ifdef CUDNN
if(gpu_index >= 0){
size_t most = 0;
size_t s = 0;
cudnnGetConvolutionForwardWorkspaceSize(cudnn_handle(),
l.srcTensorDesc,
l.weightDesc,
l.convDesc,
l.dstTensorDesc,
l.fw_algo,
&s);
if (s > most) most = s;
cudnnGetConvolutionBackwardFilterWorkspaceSize(cudnn_handle(),
l.srcTensorDesc,
l.ddstTensorDesc,
l.convDesc,
l.dweightDesc,
l.bf_algo,
&s);
if (s > most) most = s;
cudnnGetConvolutionBackwardDataWorkspaceSize(cudnn_handle(),
l.weightDesc,
l.ddstTensorDesc,
l.convDesc,
l.dsrcTensorDesc,
l.bd_algo,
&s);
if (s > most) most = s;
return most;
}
#endif
return (size_t)l.out_h*l.out_w*l.size*l.size*l.c/l.groups*sizeof(float);
}
#ifdef GPU
#ifdef CUDNN
void cudnn_convolutional_setup(layer *l)
{
cudnnSetTensor4dDescriptor(l->dsrcTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l->batch, l->c, l->h, l->w);
cudnnSetTensor4dDescriptor(l->ddstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l->batch, l->out_c, l->out_h, l->out_w);
cudnnSetTensor4dDescriptor(l->srcTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l->batch, l->c, l->h, l->w);
cudnnSetTensor4dDescriptor(l->dstTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, l->batch, l->out_c, l->out_h, l->out_w);
cudnnSetTensor4dDescriptor(l->normTensorDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, l->out_c, 1, 1);
cudnnSetFilter4dDescriptor(l->dweightDesc, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW, l->n, l->c/l->groups, l->size, l->size);
cudnnSetFilter4dDescriptor(l->weightDesc, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW, l->n, l->c/l->groups, l->size, l->size);
#if CUDNN_MAJOR >= 6
cudnnSetConvolution2dDescriptor(l->convDesc, l->pad, l->pad, l->stride, l->stride, 1, 1, CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);
#else
cudnnSetConvolution2dDescriptor(l->convDesc, l->pad, l->pad, l->stride, l->stride, 1, 1, CUDNN_CROSS_CORRELATION);
#endif
#if CUDNN_MAJOR >= 7
cudnnSetConvolutionGroupCount(l->convDesc, l->groups);
#else
if(l->groups > 1){
error("CUDNN < 7 doesn't support groups, please upgrade!");
}
#endif
cudnnGetConvolutionForwardAlgorithm(cudnn_handle(),
l->srcTensorDesc,
l->weightDesc,
l->convDesc,
l->dstTensorDesc,
CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT,
2000000000,
&l->fw_algo);
cudnnGetConvolutionBackwardDataAlgorithm(cudnn_handle(),
l->weightDesc,
l->ddstTensorDesc,
l->convDesc,
l->dsrcTensorDesc,
CUDNN_CONVOLUTION_BWD_DATA_SPECIFY_WORKSPACE_LIMIT,
2000000000,
&l->bd_algo);
cudnnGetConvolutionBackwardFilterAlgorithm(cudnn_handle(),
l->srcTensorDesc,
l->ddstTensorDesc,
l->convDesc,
l->dweightDesc,
CUDNN_CONVOLUTION_BWD_FILTER_SPECIFY_WORKSPACE_LIMIT,
2000000000,
&l->bf_algo);
}
#endif
#endif
convolutional_layer make_convolutional_layer(int batch, int h, int w, int c, int n, int groups, int size, int stride, int padding, ACTIVATION activation, int batch_normalize, int binary, int xnor, int adam)
{
int i;
convolutional_layer l = {0};
l.type = CONVOLUTIONAL;
l.groups = groups;
l.h = h;
l.w = w;
l.c = c;
l.n = n;
l.binary = binary;
l.xnor = xnor;
l.batch = batch;
l.stride = stride;
l.size = size;
l.pad = padding;
l.batch_normalize = batch_normalize;
l.weights = calloc(c/groups*n*size*size, sizeof(float));
l.weight_updates = calloc(c/groups*n*size*size, sizeof(float));
l.biases = calloc(n, sizeof(float));
l.bias_updates = calloc(n, sizeof(float));
l.nweights = c/groups*n*size*size;
l.nbiases = n;
// float scale = 1./sqrt(size*size*c);
float scale = sqrt(2./(size*size*c/l.groups));
//printf("convscale %f\n", scale);
//scale = .02;
//for(i = 0; i < c*n*size*size; ++i) l.weights[i] = scale*rand_uniform(-1, 1);
for(i = 0; i < l.nweights; ++i) l.weights[i] = scale*rand_normal();
int out_w = convolutional_out_width(l);
int out_h = convolutional_out_height(l);
l.out_h = out_h;
l.out_w = out_w;
l.out_c = n;
l.outputs = l.out_h * l.out_w * l.out_c;
l.inputs = l.w * l.h * l.c;
l.output = calloc(l.batch*l.outputs, sizeof(float));
l.delta = calloc(l.batch*l.outputs, sizeof(float));
l.forward = forward_convolutional_layer;
l.backward = backward_convolutional_layer;
l.update = update_convolutional_layer;
if(binary){
l.binary_weights = calloc(l.nweights, sizeof(float));
l.cweights = calloc(l.nweights, sizeof(char));
l.scales = calloc(n, sizeof(float));
}
if(xnor){
l.binary_weights = calloc(l.nweights, sizeof(float));
l.binary_input = calloc(l.inputs*l.batch, sizeof(float));
}
if(batch_normalize){
l.scales = calloc(n, sizeof(float));
l.scale_updates = calloc(n, sizeof(float));
for(i = 0; i < n; ++i){
l.scales[i] = 1;
}
l.mean = calloc(n, sizeof(float));
l.variance = calloc(n, sizeof(float));
l.mean_delta = calloc(n, sizeof(float));
l.variance_delta = calloc(n, sizeof(float));
l.rolling_mean = calloc(n, sizeof(float));
l.rolling_variance = calloc(n, sizeof(float));
l.x = calloc(l.batch*l.outputs, sizeof(float));
l.x_norm = calloc(l.batch*l.outputs, sizeof(float));
}
if(adam){
l.m = calloc(l.nweights, sizeof(float));
l.v = calloc(l.nweights, sizeof(float));
l.bias_m = calloc(n, sizeof(float));
l.scale_m = calloc(n, sizeof(float));
l.bias_v = calloc(n, sizeof(float));
l.scale_v = calloc(n, sizeof(float));
}
#ifdef GPU
l.forward_gpu = forward_convolutional_layer_gpu;
l.backward_gpu = backward_convolutional_layer_gpu;
l.update_gpu = update_convolutional_layer_gpu;
if(gpu_index >= 0){
if (adam) {
l.m_gpu = cuda_make_array(l.m, l.nweights);
l.v_gpu = cuda_make_array(l.v, l.nweights);
l.bias_m_gpu = cuda_make_array(l.bias_m, n);
l.bias_v_gpu = cuda_make_array(l.bias_v, n);
l.scale_m_gpu = cuda_make_array(l.scale_m, n);
l.scale_v_gpu = cuda_make_array(l.scale_v, n);
}
l.weights_gpu = cuda_make_array(l.weights, l.nweights);
l.weight_updates_gpu = cuda_make_array(l.weight_updates, l.nweights);
l.biases_gpu = cuda_make_array(l.biases, n);
l.bias_updates_gpu = cuda_make_array(l.bias_updates, n);
l.delta_gpu = cuda_make_array(l.delta, l.batch*out_h*out_w*n);
l.output_gpu = cuda_make_array(l.output, l.batch*out_h*out_w*n);
if(binary){
l.binary_weights_gpu = cuda_make_array(l.weights, l.nweights);
}
if(xnor){
l.binary_weights_gpu = cuda_make_array(l.weights, l.nweights);
l.binary_input_gpu = cuda_make_array(0, l.inputs*l.batch);
}
if(batch_normalize){
l.mean_gpu = cuda_make_array(l.mean, n);
l.variance_gpu = cuda_make_array(l.variance, n);
l.rolling_mean_gpu = cuda_make_array(l.mean, n);
l.rolling_variance_gpu = cuda_make_array(l.variance, n);
l.mean_delta_gpu = cuda_make_array(l.mean, n);
l.variance_delta_gpu = cuda_make_array(l.variance, n);
l.scales_gpu = cuda_make_array(l.scales, n);
l.scale_updates_gpu = cuda_make_array(l.scale_updates, n);
l.x_gpu = cuda_make_array(l.output, l.batch*out_h*out_w*n);
l.x_norm_gpu = cuda_make_array(l.output, l.batch*out_h*out_w*n);
}
#ifdef CUDNN
cudnnCreateTensorDescriptor(&l.normTensorDesc);
cudnnCreateTensorDescriptor(&l.srcTensorDesc);
cudnnCreateTensorDescriptor(&l.dstTensorDesc);
cudnnCreateFilterDescriptor(&l.weightDesc);
cudnnCreateTensorDescriptor(&l.dsrcTensorDesc);
cudnnCreateTensorDescriptor(&l.ddstTensorDesc);
cudnnCreateFilterDescriptor(&l.dweightDesc);
cudnnCreateConvolutionDescriptor(&l.convDesc);
cudnn_convolutional_setup(&l);
#endif
}
#endif
l.workspace_size = get_workspace_size(l);
l.activation = activation;
fprintf(stderr, "conv %5d %2d x%2d /%2d %4d x%4d x%4d -> %4d x%4d x%4d %5.3f BFLOPs\n", n, size, size, stride, w, h, c, l.out_w, l.out_h, l.out_c, (2.0 * l.n * l.size*l.size*l.c/l.groups * l.out_h*l.out_w)/1000000000.);
return l;
}
void denormalize_convolutional_layer(convolutional_layer l)
{
int i, j;
for(i = 0; i < l.n; ++i){
float scale = l.scales[i]/sqrt(l.rolling_variance[i] + .00001);
for(j = 0; j < l.c/l.groups*l.size*l.size; ++j){
l.weights[i*l.c/l.groups*l.size*l.size + j] *= scale;
}
l.biases[i] -= l.rolling_mean[i] * scale;
l.scales[i] = 1;
l.rolling_mean[i] = 0;
l.rolling_variance[i] = 1;
}
}
/*
void test_convolutional_layer()
{
convolutional_layer l = make_convolutional_layer(1, 5, 5, 3, 2, 5, 2, 1, LEAKY, 1, 0, 0, 0);
l.batch_normalize = 1;
float data[] = {1,1,1,1,1,
1,1,1,1,1,
1,1,1,1,1,
1,1,1,1,1,
1,1,1,1,1,
2,2,2,2,2,
2,2,2,2,2,
2,2,2,2,2,
2,2,2,2,2,
2,2,2,2,2,
3,3,3,3,3,
3,3,3,3,3,
3,3,3,3,3,
3,3,3,3,3,
3,3,3,3,3};
//net.input = data;
//forward_convolutional_layer(l);
}
*/
void resize_convolutional_layer(convolutional_layer *l, int w, int h)
{
l->w = w;
l->h = h;
int out_w = convolutional_out_width(*l);
int out_h = convolutional_out_height(*l);
l->out_w = out_w;
l->out_h = out_h;
l->outputs = l->out_h * l->out_w * l->out_c;
l->inputs = l->w * l->h * l->c;
l->output = realloc(l->output, l->batch*l->outputs*sizeof(float));
l->delta = realloc(l->delta, l->batch*l->outputs*sizeof(float));
if(l->batch_normalize){
l->x = realloc(l->x, l->batch*l->outputs*sizeof(float));
l->x_norm = realloc(l->x_norm, l->batch*l->outputs*sizeof(float));
}
#ifdef GPU
cuda_free(l->delta_gpu);
cuda_free(l->output_gpu);
l->delta_gpu = cuda_make_array(l->delta, l->batch*l->outputs);
l->output_gpu = cuda_make_array(l->output, l->batch*l->outputs);
if(l->batch_normalize){
cuda_free(l->x_gpu);
cuda_free(l->x_norm_gpu);
l->x_gpu = cuda_make_array(l->output, l->batch*l->outputs);
l->x_norm_gpu = cuda_make_array(l->output, l->batch*l->outputs);
}
#ifdef CUDNN
cudnn_convolutional_setup(l);
#endif
#endif
l->workspace_size = get_workspace_size(*l);
}
void add_bias(float *output, float *biases, int batch, int n, int size)
{
int i,j,b;
for(b = 0; b < batch; ++b){
for(i = 0; i < n; ++i){
for(j = 0; j < size; ++j){
output[(b*n + i)*size + j] += biases[i];
}
}
}
}
void scale_bias(float *output, float *scales, int batch, int n, int size)
{
int i,j,b;
for(b = 0; b < batch; ++b){
for(i = 0; i < n; ++i){
for(j = 0; j < size; ++j){
output[(b*n + i)*size + j] *= scales[i];
}
}
}
}
void backward_bias(float *bias_updates, float *delta, int batch, int n, int size)
{
int i,b;
for(b = 0; b < batch; ++b){
for(i = 0; i < n; ++i){
bias_updates[i] += sum_array(delta+size*(i+b*n), size);
}
}
}
void forward_convolutional_layer(convolutional_layer l, network net)
{
int i, j;
fill_cpu(l.outputs*l.batch, 0, l.output, 1);
if(l.xnor){
binarize_weights(l.weights, l.n, l.c/l.groups*l.size*l.size, l.binary_weights);
swap_binary(&l);
binarize_cpu(net.input, l.c*l.h*l.w*l.batch, l.binary_input);
net.input = l.binary_input;
}
int m = l.n/l.groups;
int k = l.size*l.size*l.c/l.groups;
int n = l.out_w*l.out_h;
for(i = 0; i < l.batch; ++i){
for(j = 0; j < l.groups; ++j){
float *a = l.weights + j*l.nweights/l.groups;
float *b = net.workspace;
float *c = l.output + (i*l.groups + j)*n*m;
float *im = net.input + (i*l.groups + j)*l.c/l.groups*l.h*l.w;
if (l.size == 1) {
b = im;
} else {
im2col_cpu(im, l.c/l.groups, l.h, l.w, l.size, l.stride, l.pad, b);
}
gemm(0,0,m,n,k,1,a,k,b,n,1,c,n);
}
}
if(l.batch_normalize){
forward_batchnorm_layer(l, net);
} else {
add_bias(l.output, l.biases, l.batch, l.n, l.out_h*l.out_w);
}
activate_array(l.output, l.outputs*l.batch, l.activation);
if(l.binary || l.xnor) swap_binary(&l);
}
void backward_convolutional_layer(convolutional_layer l, network net)
{
int i, j;
int m = l.n/l.groups;
int n = l.size*l.size*l.c/l.groups;
int k = l.out_w*l.out_h;
gradient_array(l.output, l.outputs*l.batch, l.activation, l.delta);
if(l.batch_normalize){
backward_batchnorm_layer(l, net);
} else {
backward_bias(l.bias_updates, l.delta, l.batch, l.n, k);
}
for(i = 0; i < l.batch; ++i){
for(j = 0; j < l.groups; ++j){
float *a = l.delta + (i*l.groups + j)*m*k;
float *b = net.workspace;
float *c = l.weight_updates + j*l.nweights/l.groups;
float *im = net.input + (i*l.groups + j)*l.c/l.groups*l.h*l.w;
float *imd = net.delta + (i*l.groups + j)*l.c/l.groups*l.h*l.w;
if(l.size == 1){
b = im;
} else {
im2col_cpu(im, l.c/l.groups, l.h, l.w,
l.size, l.stride, l.pad, b);
}
gemm(0,1,m,n,k,1,a,k,b,k,1,c,n);
if (net.delta) {
a = l.weights + j*l.nweights/l.groups;
b = l.delta + (i*l.groups + j)*m*k;
c = net.workspace;
if (l.size == 1) {
c = imd;
}
gemm(1,0,n,k,m,1,a,n,b,k,0,c,k);
if (l.size != 1) {
col2im_cpu(net.workspace, l.c/l.groups, l.h, l.w, l.size, l.stride, l.pad, imd);
}
}
}
}
}
void update_convolutional_layer(convolutional_layer l, update_args a)
{
float learning_rate = a.learning_rate*l.learning_rate_scale;
float momentum = a.momentum;
float decay = a.decay;
int batch = a.batch;
axpy_cpu(l.n, learning_rate/batch, l.bias_updates, 1, l.biases, 1);
scal_cpu(l.n, momentum, l.bias_updates, 1);
if(l.scales){
axpy_cpu(l.n, learning_rate/batch, l.scale_updates, 1, l.scales, 1);
scal_cpu(l.n, momentum, l.scale_updates, 1);
}
axpy_cpu(l.nweights, -decay*batch, l.weights, 1, l.weight_updates, 1);
axpy_cpu(l.nweights, learning_rate/batch, l.weight_updates, 1, l.weights, 1);
scal_cpu(l.nweights, momentum, l.weight_updates, 1);
}
image get_convolutional_weight(convolutional_layer l, int i)
{
int h = l.size;
int w = l.size;
int c = l.c/l.groups;
return float_to_image(w,h,c,l.weights+i*h*w*c);
}
void rgbgr_weights(convolutional_layer l)
{
int i;
for(i = 0; i < l.n; ++i){
image im = get_convolutional_weight(l, i);
if (im.c == 3) {
rgbgr_image(im);
}
}
}
void rescale_weights(convolutional_layer l, float scale, float trans)
{
int i;
for(i = 0; i < l.n; ++i){
image im = get_convolutional_weight(l, i);
if (im.c == 3) {
scale_image(im, scale);
float sum = sum_array(im.data, im.w*im.h*im.c);
l.biases[i] += sum*trans;
}
}
}
image *get_weights(convolutional_layer l)
{
image *weights = calloc(l.n, sizeof(image));
int i;
for(i = 0; i < l.n; ++i){
weights[i] = copy_image(get_convolutional_weight(l, i));
normalize_image(weights[i]);
/*
char buff[256];
sprintf(buff, "filter%d", i);
save_image(weights[i], buff);
*/
}
//error("hey");
return weights;
}
image *visualize_convolutional_layer(convolutional_layer l, char *window, image *prev_weights)
{
image *single_weights = get_weights(l);
show_images(single_weights, l.n, window);
image delta = get_convolutional_image(l);
image dc = collapse_image_layers(delta, 1);
char buff[256];
sprintf(buff, "%s: Output", window);
//show_image(dc, buff);
//save_image(dc, buff);
free_image(dc);
return single_weights;
}

#ifndef CONVOLUTIONAL_LAYER_H
#define CONVOLUTIONAL_LAYER_H
#include "cuda.h"
#include "image.h"
#include "activations.h"
#include "layer.h"
#include "network.h"
typedef layer convolutional_layer;
#ifdef GPU
void forward_convolutional_layer_gpu(convolutional_layer layer, network net);
void backward_convolutional_layer_gpu(convolutional_layer layer, network net);
void update_convolutional_layer_gpu(convolutional_layer layer, update_args a);
void push_convolutional_layer(convolutional_layer layer);
void pull_convolutional_layer(convolutional_layer layer);
void add_bias_gpu(float *output, float *biases, int batch, int n, int size);
void backward_bias_gpu(float *bias_updates, float *delta, int batch, int n, int size);
void adam_update_gpu(float *w, float *d, float *m, float *v, float B1, float B2, float eps, float decay, float rate, int n, int batch, int t);
#ifdef CUDNN
void cudnn_convolutional_setup(layer *l);
#endif
#endif
convolutional_layer make_convolutional_layer(int batch, int h, int w, int c, int n, int groups, int size, int stride, int padding, ACTIVATION activation, int batch_normalize, int binary, int xnor, int adam);
void resize_convolutional_layer(convolutional_layer *layer, int w, int h);
void forward_convolutional_layer(const convolutional_layer layer, network net);
void update_convolutional_layer(convolutional_layer layer, update_args a);
image *visualize_convolutional_layer(convolutional_layer layer, char *window, image *prev_weights);
void binarize_weights(float *weights, int n, int size, float *binary);
void swap_binary(convolutional_layer *l);
void binarize_weights2(float *weights, int n, int size, char *binary, float *scales);
void backward_convolutional_layer(convolutional_layer layer, network net);
void add_bias(float *output, float *biases, int batch, int n, int size);
void backward_bias(float *bias_updates, float *delta, int batch, int n, int size);
image get_convolutional_image(convolutional_layer layer);
image get_convolutional_delta(convolutional_layer layer);
image get_convolutional_weight(convolutional_layer layer, int i);
int convolutional_out_height(convolutional_layer layer);
int convolutional_out_width(convolutional_layer layer);
#endif

#include "cost_layer.h"
#include "utils.h"
#include "cuda.h"
#include "blas.h"
#include <math.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
COST_TYPE get_cost_type(char *s)
{
if (strcmp(s, "seg")==0) return SEG;
if (strcmp(s, "sse")==0) return SSE;
if (strcmp(s, "masked")==0) return MASKED;
if (strcmp(s, "smooth")==0) return SMOOTH;
if (strcmp(s, "L1")==0) return L1;
if (strcmp(s, "wgan")==0) return WGAN;
fprintf(stderr, "Couldn't find cost type %s, going with SSE\n", s);
return SSE;
}
char *get_cost_string(COST_TYPE a)
{
switch(a){
case SEG:
return "seg";
case SSE:
return "sse";
case MASKED:
return "masked";
case SMOOTH:
return "smooth";
case L1:
return "L1";
case WGAN:
return "wgan";
}
return "sse";
}
cost_layer make_cost_layer(int batch, int inputs, COST_TYPE cost_type, float scale)
{
fprintf(stderr, "cost %4d\n", inputs);
cost_layer l = {0};
l.type = COST;
l.scale = scale;
l.batch = batch;
l.inputs = inputs;
l.outputs = inputs;
l.cost_type = cost_type;
l.delta = calloc(inputs*batch, sizeof(float));
l.output = calloc(inputs*batch, sizeof(float));
l.cost = calloc(1, sizeof(float));
l.forward = forward_cost_layer;
l.backward = backward_cost_layer;
#ifdef GPU
l.forward_gpu = forward_cost_layer_gpu;
l.backward_gpu = backward_cost_layer_gpu;
l.delta_gpu = cuda_make_array(l.delta, inputs*batch);
l.output_gpu = cuda_make_array(l.output, inputs*batch);
#endif
return l;
}
void resize_cost_layer(cost_layer *l, int inputs)
{
l->inputs = inputs;
l->outputs = inputs;
l->delta = realloc(l->delta, inputs*l->batch*sizeof(float));
l->output = realloc(l->output, inputs*l->batch*sizeof(float));
#ifdef GPU
cuda_free(l->delta_gpu);
cuda_free(l->output_gpu);
l->delta_gpu = cuda_make_array(l->delta, inputs*l->batch);
l->output_gpu = cuda_make_array(l->output, inputs*l->batch);
#endif
}
void forward_cost_layer(cost_layer l, network net)
{
if (!net.truth) return;
if(l.cost_type == MASKED){
int i;
for(i = 0; i < l.batch*l.inputs; ++i){
if(net.truth[i] == SECRET_NUM) net.input[i] = SECRET_NUM;
}
}
if(l.cost_type == SMOOTH){
smooth_l1_cpu(l.batch*l.inputs, net.input, net.truth, l.delta, l.output);
}else if(l.cost_type == L1){
l1_cpu(l.batch*l.inputs, net.input, net.truth, l.delta, l.output);
} else {
l2_cpu(l.batch*l.inputs, net.input, net.truth, l.delta, l.output);
}
l.cost[0] = sum_array(l.output, l.batch*l.inputs);
}
void backward_cost_layer(const cost_layer l, network net)
{
axpy_cpu(l.batch*l.inputs, l.scale, l.delta, 1, net.delta, 1);
}
#ifdef GPU
void pull_cost_layer(cost_layer l)
{
cuda_pull_array(l.delta_gpu, l.delta, l.batch*l.inputs);
}
void push_cost_layer(cost_layer l)
{
cuda_push_array(l.delta_gpu, l.delta, l.batch*l.inputs);
}
int float_abs_compare (const void * a, const void * b)
{
float fa = *(const float*) a;
if(fa < 0) fa = -fa;
float fb = *(const float*) b;
if(fb < 0) fb = -fb;
return (fa > fb) - (fa < fb);
}
void forward_cost_layer_gpu(cost_layer l, network net)
{
if (!net.truth) return;
if(l.smooth){
scal_gpu(l.batch*l.inputs, (1-l.smooth), net.truth_gpu, 1);
add_gpu(l.batch*l.inputs, l.smooth * 1./l.inputs, net.truth_gpu, 1);
}
if(l.cost_type == SMOOTH){
smooth_l1_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);
} else if (l.cost_type == L1){
l1_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);
} else if (l.cost_type == WGAN){
wgan_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);
} else {
l2_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);
}
if (l.cost_type == SEG && l.noobject_scale != 1) {
scale_mask_gpu(l.batch*l.inputs, l.delta_gpu, 0, net.truth_gpu, l.noobject_scale);
scale_mask_gpu(l.batch*l.inputs, l.output_gpu, 0, net.truth_gpu, l.noobject_scale);
}
if (l.cost_type == MASKED) {
mask_gpu(l.batch*l.inputs, net.delta_gpu, SECRET_NUM, net.truth_gpu, 0);
}
if(l.ratio){
cuda_pull_array(l.delta_gpu, l.delta, l.batch*l.inputs);
qsort(l.delta, l.batch*l.inputs, sizeof(float), float_abs_compare);
int n = (1-l.ratio) * l.batch*l.inputs;
float thresh = l.delta[n];
thresh = 0; /* debug override: the sorted percentile threshold is discarded, so supp_gpu keeps every delta */
printf("%f\n", thresh);
supp_gpu(l.batch*l.inputs, thresh, l.delta_gpu, 1);
}
if(l.thresh){
supp_gpu(l.batch*l.inputs, l.thresh*1./l.inputs, l.delta_gpu, 1);
}
cuda_pull_array(l.output_gpu, l.output, l.batch*l.inputs);
l.cost[0] = sum_array(l.output, l.batch*l.inputs);
}
void backward_cost_layer_gpu(const cost_layer l, network net)
{
axpy_gpu(l.batch*l.inputs, l.scale, l.delta_gpu, 1, net.delta_gpu, 1);
}
#endif

#ifndef COST_LAYER_H
#define COST_LAYER_H
#include "layer.h"
#include "network.h"
typedef layer cost_layer;
COST_TYPE get_cost_type(char *s);
char *get_cost_string(COST_TYPE a);
cost_layer make_cost_layer(int batch, int inputs, COST_TYPE type, float scale);
void forward_cost_layer(const cost_layer l, network net);
void backward_cost_layer(const cost_layer l, network net);
void resize_cost_layer(cost_layer *l, int inputs);
#ifdef GPU
void forward_cost_layer_gpu(cost_layer l, network net);
void backward_cost_layer_gpu(const cost_layer l, network net);
#endif
#endif

#include "crnn_layer.h"
#include "convolutional_layer.h"
#include "utils.h"
#include "cuda.h"
#include "blas.h"
#include "gemm.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
static void increment_layer(layer *l, int steps)
{
int num = l->outputs*l->batch*steps;
l->output += num;
l->delta += num;
l->x += num;
l->x_norm += num;
#ifdef GPU
l->output_gpu += num;
l->delta_gpu += num;
l->x_gpu += num;
l->x_norm_gpu += num;
#endif
}
layer make_crnn_layer(int batch, int h, int w, int c, int hidden_filters, int output_filters, int steps, ACTIVATION activation, int batch_normalize)
{
fprintf(stderr, "CRNN Layer: %d x %d x %d image, %d filters\n", h,w,c,output_filters);
batch = batch / steps;
layer l = {0};
l.batch = batch;
l.type = CRNN;
l.steps = steps;
l.h = h;
l.w = w;
l.c = c;
l.out_h = h;
l.out_w = w;
l.out_c = output_filters;
l.inputs = h*w*c;
l.hidden = h * w * hidden_filters;
l.outputs = l.out_h * l.out_w * l.out_c;
l.state = calloc(l.hidden*batch*(steps+1), sizeof(float));
l.input_layer = malloc(sizeof(layer));
fprintf(stderr, "\t\t");
*(l.input_layer) = make_convolutional_layer(batch*steps, h, w, c, hidden_filters, 1, 3, 1, 1, activation, batch_normalize, 0, 0, 0);
l.input_layer->batch = batch;
l.self_layer = malloc(sizeof(layer));
fprintf(stderr, "\t\t");
*(l.self_layer) = make_convolutional_layer(batch*steps, h, w, hidden_filters, hidden_filters, 1, 3, 1, 1, activation, batch_normalize, 0, 0, 0);
l.self_layer->batch = batch;
l.output_layer = malloc(sizeof(layer));
fprintf(stderr, "\t\t");
*(l.output_layer) = make_convolutional_layer(batch*steps, h, w, hidden_filters, output_filters, 1, 3, 1, 1, activation, batch_normalize, 0, 0, 0);
l.output_layer->batch = batch;
l.output = l.output_layer->output;
l.delta = l.output_layer->delta;
l.forward = forward_crnn_layer;
l.backward = backward_crnn_layer;
l.update = update_crnn_layer;
#ifdef GPU
l.forward_gpu = forward_crnn_layer_gpu;
l.backward_gpu = backward_crnn_layer_gpu;
l.update_gpu = update_crnn_layer_gpu;
l.state_gpu = cuda_make_array(l.state, l.hidden*batch*(steps+1));
l.output_gpu = l.output_layer->output_gpu;
l.delta_gpu = l.output_layer->delta_gpu;
#endif
return l;
}
void update_crnn_layer(layer l, update_args a)
{
update_convolutional_layer(*(l.input_layer), a);
update_convolutional_layer(*(l.self_layer), a);
update_convolutional_layer(*(l.output_layer), a);
}
void forward_crnn_layer(layer l, network net)
{
network s = net;
s.train = net.train;
int i;
layer input_layer = *(l.input_layer);
layer self_layer = *(l.self_layer);
layer output_layer = *(l.output_layer);
fill_cpu(l.outputs * l.batch * l.steps, 0, output_layer.delta, 1);
fill_cpu(l.hidden * l.batch * l.steps, 0, self_layer.delta, 1);
fill_cpu(l.hidden * l.batch * l.steps, 0, input_layer.delta, 1);
if(net.train) fill_cpu(l.hidden * l.batch, 0, l.state, 1);
for (i = 0; i < l.steps; ++i) {
s.input = net.input;
forward_convolutional_layer(input_layer, s);
s.input = l.state;
forward_convolutional_layer(self_layer, s);
float *old_state = l.state;
if(net.train) l.state += l.hidden*l.batch;
if(l.shortcut){
copy_cpu(l.hidden * l.batch, old_state, 1, l.state, 1);
}else{
fill_cpu(l.hidden * l.batch, 0, l.state, 1);
}
axpy_cpu(l.hidden * l.batch, 1, input_layer.output, 1, l.state, 1);
axpy_cpu(l.hidden * l.batch, 1, self_layer.output, 1, l.state, 1);
s.input = l.state;
forward_convolutional_layer(output_layer, s);
net.input += l.inputs*l.batch;
increment_layer(&input_layer, 1);
increment_layer(&self_layer, 1);
increment_layer(&output_layer, 1);
}
}
void backward_crnn_layer(layer l, network net)
{
network s = net;
int i;
layer input_layer = *(l.input_layer);
layer self_layer = *(l.self_layer);
layer output_layer = *(l.output_layer);
increment_layer(&input_layer, l.steps-1);
increment_layer(&self_layer, l.steps-1);
increment_layer(&output_layer, l.steps-1);
l.state += l.hidden*l.batch*l.steps;
for (i = l.steps-1; i >= 0; --i) {
copy_cpu(l.hidden * l.batch, input_layer.output, 1, l.state, 1);
axpy_cpu(l.hidden * l.batch, 1, self_layer.output, 1, l.state, 1);
s.input = l.state;
s.delta = self_layer.delta;
backward_convolutional_layer(output_layer, s);
l.state -= l.hidden*l.batch;
/*
if(i > 0){
copy_cpu(l.hidden * l.batch, input_layer.output - l.hidden*l.batch, 1, l.state, 1);
axpy_cpu(l.hidden * l.batch, 1, self_layer.output - l.hidden*l.batch, 1, l.state, 1);
}else{
fill_cpu(l.hidden * l.batch, 0, l.state, 1);
}
*/
s.input = l.state;
s.delta = self_layer.delta - l.hidden*l.batch;
if (i == 0) s.delta = 0;
backward_convolutional_layer(self_layer, s);
copy_cpu(l.hidden*l.batch, self_layer.delta, 1, input_layer.delta, 1);
if (i > 0 && l.shortcut) axpy_cpu(l.hidden*l.batch, 1, self_layer.delta, 1, self_layer.delta - l.hidden*l.batch, 1);
s.input = net.input + i*l.inputs*l.batch;
if(net.delta) s.delta = net.delta + i*l.inputs*l.batch;
else s.delta = 0;
backward_convolutional_layer(input_layer, s);
increment_layer(&input_layer, -1);
increment_layer(&self_layer, -1);
increment_layer(&output_layer, -1);
}
}
#ifdef GPU
void pull_crnn_layer(layer l)
{
pull_convolutional_layer(*(l.input_layer));
pull_convolutional_layer(*(l.self_layer));
pull_convolutional_layer(*(l.output_layer));
}
void push_crnn_layer(layer l)
{
push_convolutional_layer(*(l.input_layer));
push_convolutional_layer(*(l.self_layer));
push_convolutional_layer(*(l.output_layer));
}
void update_crnn_layer_gpu(layer l, update_args a)
{
update_convolutional_layer_gpu(*(l.input_layer), a);
update_convolutional_layer_gpu(*(l.self_layer), a);
update_convolutional_layer_gpu(*(l.output_layer), a);
}
void forward_crnn_layer_gpu(layer l, network net)
{
network s = net;
int i;
layer input_layer = *(l.input_layer);
layer self_layer = *(l.self_layer);
layer output_layer = *(l.output_layer);
fill_gpu(l.outputs * l.batch * l.steps, 0, output_layer.delta_gpu, 1);
fill_gpu(l.hidden * l.batch * l.steps, 0, self_layer.delta_gpu, 1);
fill_gpu(l.hidden * l.batch * l.steps, 0, input_layer.delta_gpu, 1);
if(net.train) fill_gpu(l.hidden * l.batch, 0, l.state_gpu, 1);
for (i = 0; i < l.steps; ++i) {
s.input_gpu = net.input_gpu;
forward_convolutional_layer_gpu(input_layer, s);
s.input_gpu = l.state_gpu;
forward_convolutional_layer_gpu(self_layer, s);
float *old_state = l.state_gpu;
if(net.train) l.state_gpu += l.hidden*l.batch;
if(l.shortcut){
copy_gpu(l.hidden * l.batch, old_state, 1, l.state_gpu, 1);
}else{
fill_gpu(l.hidden * l.batch, 0, l.state_gpu, 1);
}
axpy_gpu(l.hidden * l.batch, 1, input_layer.output_gpu, 1, l.state_gpu, 1);
axpy_gpu(l.hidden * l.batch, 1, self_layer.output_gpu, 1, l.state_gpu, 1);
s.input_gpu = l.state_gpu;
forward_convolutional_layer_gpu(output_layer, s);
net.input_gpu += l.inputs*l.batch;
increment_layer(&input_layer, 1);
increment_layer(&self_layer, 1);
increment_layer(&output_layer, 1);
}
}
void backward_crnn_layer_gpu(layer l, network net)
{
network s = net;
s.train = net.train;
int i;
layer input_layer = *(l.input_layer);
layer self_layer = *(l.self_layer);
layer output_layer = *(l.output_layer);
increment_layer(&input_layer, l.steps - 1);
increment_layer(&self_layer, l.steps - 1);
increment_layer(&output_layer, l.steps - 1);
l.state_gpu += l.hidden*l.batch*l.steps;
for (i = l.steps-1; i >= 0; --i) {
copy_gpu(l.hidden * l.batch, input_layer.output_gpu, 1, l.state_gpu, 1);
axpy_gpu(l.hidden * l.batch, 1, self_layer.output_gpu, 1, l.state_gpu, 1);
s.input_gpu = l.state_gpu;
s.delta_gpu = self_layer.delta_gpu;
backward_convolutional_layer_gpu(output_layer, s);
l.state_gpu -= l.hidden*l.batch;
s.input_gpu = l.state_gpu;
s.delta_gpu = self_layer.delta_gpu - l.hidden*l.batch;
if (i == 0) s.delta_gpu = 0;
backward_convolutional_layer_gpu(self_layer, s);
copy_gpu(l.hidden*l.batch, self_layer.delta_gpu, 1, input_layer.delta_gpu, 1);
if (i > 0 && l.shortcut) axpy_gpu(l.hidden*l.batch, 1, self_layer.delta_gpu, 1, self_layer.delta_gpu - l.hidden*l.batch, 1);
s.input_gpu = net.input_gpu + i*l.inputs*l.batch;
if(net.delta_gpu) s.delta_gpu = net.delta_gpu + i*l.inputs*l.batch;
else s.delta_gpu = 0;
backward_convolutional_layer_gpu(input_layer, s);
increment_layer(&input_layer, -1);
increment_layer(&self_layer, -1);
increment_layer(&output_layer, -1);
}
}
#endif

#ifndef CRNN_LAYER_H
#define CRNN_LAYER_H
#include "activations.h"
#include "layer.h"
#include "network.h"
layer make_crnn_layer(int batch, int h, int w, int c, int hidden_filters, int output_filters, int steps, ACTIVATION activation, int batch_normalize);
void forward_crnn_layer(layer l, network net);
void backward_crnn_layer(layer l, network net);
void update_crnn_layer(layer l, update_args a);
#ifdef GPU
void forward_crnn_layer_gpu(layer l, network net);
void backward_crnn_layer_gpu(layer l, network net);
void update_crnn_layer_gpu(layer l, update_args a);
void push_crnn_layer(layer l);
void pull_crnn_layer(layer l);
#endif
#endif

#include "crop_layer.h"
#include "cuda.h"
#include <stdio.h>
image get_crop_image(crop_layer l)
{
int h = l.out_h;
int w = l.out_w;
int c = l.out_c;
return float_to_image(w,h,c,l.output);
}
void backward_crop_layer(const crop_layer l, network net){}
void backward_crop_layer_gpu(const crop_layer l, network net){}
crop_layer make_crop_layer(int batch, int h, int w, int c, int crop_height, int crop_width, int flip, float angle, float saturation, float exposure)
{
fprintf(stderr, "Crop Layer: %d x %d -> %d x %d x %d image\n", h,w,crop_height,crop_width,c);
crop_layer l = {0};
l.type = CROP;
l.batch = batch;
l.h = h;
l.w = w;
l.c = c;
l.scale = (float)crop_height / h;
l.flip = flip;
l.angle = angle;
l.saturation = saturation;
l.exposure = exposure;
l.out_w = crop_width;
l.out_h = crop_height;
l.out_c = c;
l.inputs = l.w * l.h * l.c;
l.outputs = l.out_w * l.out_h * l.out_c;
l.output = calloc(l.outputs*batch, sizeof(float));
l.forward = forward_crop_layer;
l.backward = backward_crop_layer;
#ifdef GPU
l.forward_gpu = forward_crop_layer_gpu;
l.backward_gpu = backward_crop_layer_gpu;
l.output_gpu = cuda_make_array(l.output, l.outputs*batch);
l.rand_gpu = cuda_make_array(0, l.batch*8);
#endif
return l;
}
void resize_crop_layer(layer *l, int w, int h)
{
l->w = w;
l->h = h;
l->out_w = l->scale*w;
l->out_h = l->scale*h;
l->inputs = l->w * l->h * l->c;
l->outputs = l->out_h * l->out_w * l->out_c;
l->output = realloc(l->output, l->batch*l->outputs*sizeof(float));
#ifdef GPU
cuda_free(l->output_gpu);
l->output_gpu = cuda_make_array(l->output, l->outputs*l->batch);
#endif
}
void forward_crop_layer(const crop_layer l, network net)
{
int i,j,c,b,row,col;
int index;
int count = 0;
int flip = (l.flip && rand()%2);
int dh = rand()%(l.h - l.out_h + 1);
int dw = rand()%(l.w - l.out_w + 1);
float scale = 2;
float trans = -1;
if(l.noadjust){
scale = 1;
trans = 0;
}
if(!net.train){
flip = 0;
dh = (l.h - l.out_h)/2;
dw = (l.w - l.out_w)/2;
}
for(b = 0; b < l.batch; ++b){
for(c = 0; c < l.c; ++c){
for(i = 0; i < l.out_h; ++i){
for(j = 0; j < l.out_w; ++j){
if(flip){
col = l.w - dw - j - 1;
}else{
col = j + dw;
}
row = i + dh;
index = col+l.w*(row+l.h*(c + l.c*b));
l.output[count++] = net.input[index]*scale + trans;
}
}
}
}
}

#ifndef CROP_LAYER_H
#define CROP_LAYER_H
#include "image.h"
#include "layer.h"
#include "network.h"
typedef layer crop_layer;
image get_crop_image(crop_layer l);
crop_layer make_crop_layer(int batch, int h, int w, int c, int crop_height, int crop_width, int flip, float angle, float saturation, float exposure);
void forward_crop_layer(const crop_layer l, network net);
void resize_crop_layer(layer *l, int w, int h);
#ifdef GPU
void forward_crop_layer_gpu(crop_layer l, network net);
#endif
#endif

#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
#include "crop_layer.h"
#include "utils.h"
#include "cuda.h"
#include "image.h"
__device__ float get_pixel_kernel(float *image, int w, int h, int x, int y, int c)
{
if(x < 0 || x >= w || y < 0 || y >= h) return 0;
return image[x + w*(y + c*h)];
}
__device__ float3 rgb_to_hsv_kernel(float3 rgb)
{
float r = rgb.x;
float g = rgb.y;
float b = rgb.z;
float h, s, v;
float max = (r > g) ? ( (r > b) ? r : b) : ( (g > b) ? g : b);
float min = (r < g) ? ( (r < b) ? r : b) : ( (g < b) ? g : b);
float delta = max - min;
v = max;
if(max == 0){
s = 0;
h = -1;
}else{
s = delta/max;
if(r == max){
h = (g - b) / delta;
} else if (g == max) {
h = 2 + (b - r) / delta;
} else {
h = 4 + (r - g) / delta;
}
if (h < 0) h += 6;
}
return make_float3(h, s, v);
}
__device__ float3 hsv_to_rgb_kernel(float3 hsv)
{
float h = hsv.x;
float s = hsv.y;
float v = hsv.z;
float r, g, b;
float f, p, q, t;
if (s == 0) {
r = g = b = v;
} else {
int index = (int) floorf(h);
f = h - index;
p = v*(1-s);
q = v*(1-s*f);
t = v*(1-s*(1-f));
if(index == 0){
r = v; g = t; b = p;
} else if(index == 1){
r = q; g = v; b = p;
} else if(index == 2){
r = p; g = v; b = t;
} else if(index == 3){
r = p; g = q; b = v;
} else if(index == 4){
r = t; g = p; b = v;
} else {
r = v; g = p; b = q;
}
}
r = (r < 0) ? 0 : ((r > 1) ? 1 : r);
g = (g < 0) ? 0 : ((g > 1) ? 1 : g);
b = (b < 0) ? 0 : ((b > 1) ? 1 : b);
return make_float3(r, g, b);
}
__device__ float bilinear_interpolate_kernel(float *image, int w, int h, float x, float y, int c)
{
int ix = (int) floorf(x);
int iy = (int) floorf(y);
float dx = x - ix;
float dy = y - iy;
float val = (1-dy) * (1-dx) * get_pixel_kernel(image, w, h, ix, iy, c) +
dy * (1-dx) * get_pixel_kernel(image, w, h, ix, iy+1, c) +
(1-dy) * dx * get_pixel_kernel(image, w, h, ix+1, iy, c) +
dy * dx * get_pixel_kernel(image, w, h, ix+1, iy+1, c);
return val;
}
__global__ void levels_image_kernel(float *image, float *rand, int batch, int w, int h, int train, float saturation, float exposure, float translate, float scale, float shift)
{
int size = batch * w * h;
int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(id >= size) return;
int x = id % w;
id /= w;
int y = id % h;
id /= h;
float rshift = rand[0];
float gshift = rand[1];
float bshift = rand[2];
float r0 = rand[8*id + 0];
float r1 = rand[8*id + 1];
float r2 = rand[8*id + 2];
float r3 = rand[8*id + 3];
saturation = r0*(saturation - 1) + 1;
saturation = (r1 > .5f) ? 1.f/saturation : saturation;
exposure = r2*(exposure - 1) + 1;
exposure = (r3 > .5f) ? 1.f/exposure : exposure;
size_t offset = id * h * w * 3;
image += offset;
float r = image[x + w*(y + h*0)];
float g = image[x + w*(y + h*1)];
float b = image[x + w*(y + h*2)];
float3 rgb = make_float3(r,g,b);
if(train){
float3 hsv = rgb_to_hsv_kernel(rgb);
hsv.y *= saturation;
hsv.z *= exposure;
rgb = hsv_to_rgb_kernel(hsv);
} else {
shift = 0;
}
image[x + w*(y + h*0)] = rgb.x*scale + translate + (rshift - .5f)*shift;
image[x + w*(y + h*1)] = rgb.y*scale + translate + (gshift - .5f)*shift;
image[x + w*(y + h*2)] = rgb.z*scale + translate + (bshift - .5f)*shift;
}
__global__ void forward_crop_layer_kernel(float *input, float *rand, int size, int c, int h, int w, int crop_height, int crop_width, int train, int flip, float angle, float *output)
{
int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(id >= size) return;
float cx = w/2.f;
float cy = h/2.f;
int count = id;
int j = id % crop_width;
id /= crop_width;
int i = id % crop_height;
id /= crop_height;
int k = id % c;
id /= c;
int b = id;
float r4 = rand[8*b + 4];
float r5 = rand[8*b + 5];
float r6 = rand[8*b + 6];
float r7 = rand[8*b + 7];
float dw = (w - crop_width)*r4;
float dh = (h - crop_height)*r5;
flip = (flip && (r6 > .5f));
angle = 2*angle*r7 - angle;
if(!train){
dw = (w - crop_width)/2.f;
dh = (h - crop_height)/2.f;
flip = 0;
angle = 0;
}
input += w*h*c*b;
float x = (flip) ? w - dw - j - 1 : j + dw;
float y = i + dh;
float rx = cosf(angle)*(x-cx) - sinf(angle)*(y-cy) + cx;
float ry = sinf(angle)*(x-cx) + cosf(angle)*(y-cy) + cy;
output[count] = bilinear_interpolate_kernel(input, w, h, rx, ry, k);
}
void forward_crop_layer_gpu(crop_layer layer, network net)
{
cuda_random(layer.rand_gpu, layer.batch*8);
float radians = layer.angle*3.14159265f/180.f;
float scale = 2;
float translate = -1;
if(layer.noadjust){
scale = 1;
translate = 0;
}
int size = layer.batch * layer.w * layer.h;
levels_image_kernel<<<cuda_gridsize(size), BLOCK>>>(net.input_gpu, layer.rand_gpu, layer.batch, layer.w, layer.h, net.train, layer.saturation, layer.exposure, translate, scale, layer.shift);
check_error(cudaPeekAtLastError());
size = layer.batch*layer.c*layer.out_w*layer.out_h;
forward_crop_layer_kernel<<<cuda_gridsize(size), BLOCK>>>(net.input_gpu, layer.rand_gpu, size, layer.c, layer.h, layer.w, layer.out_h, layer.out_w, net.train, layer.flip, radians, layer.output_gpu);
check_error(cudaPeekAtLastError());
/*
cuda_pull_array(layer.output_gpu, layer.output, size);
image im = float_to_image(layer.crop_width, layer.crop_height, layer.c, layer.output + 0*(size/layer.batch));
image im2 = float_to_image(layer.crop_width, layer.crop_height, layer.c, layer.output + 1*(size/layer.batch));
image im3 = float_to_image(layer.crop_width, layer.crop_height, layer.c, layer.output + 2*(size/layer.batch));
translate_image(im, -translate);
scale_image(im, 1/scale);
translate_image(im2, -translate);
scale_image(im2, 1/scale);
translate_image(im3, -translate);
scale_image(im3, 1/scale);
show_image(im, "cropped");
show_image(im2, "cropped2");
show_image(im3, "cropped3");
cvWaitKey(0);
*/
}

int gpu_index = 0;
#ifdef GPU
#include "cuda.h"
#include "utils.h"
#include "blas.h"
#include <assert.h>
#include <stdlib.h>
#include <time.h>
void cuda_set_device(int n)
{
gpu_index = n;
cudaError_t status = cudaSetDevice(n);
check_error(status);
}
int cuda_get_device()
{
int n = 0;
cudaError_t status = cudaGetDevice(&n);
check_error(status);
return n;
}
void check_error(cudaError_t status)
{
//cudaDeviceSynchronize();
cudaError_t status2 = cudaGetLastError();
if (status != cudaSuccess)
{
const char *s = cudaGetErrorString(status);
char buffer[256];
printf("CUDA Error: %s\n", s);
snprintf(buffer, 256, "CUDA Error: %s", s);
error(buffer);
}
if (status2 != cudaSuccess)
{
const char *s = cudaGetErrorString(status2);
char buffer[256];
printf("CUDA Error Prev: %s\n", s);
snprintf(buffer, 256, "CUDA Error Prev: %s", s);
error(buffer);
}
}
dim3 cuda_gridsize(size_t n){
    size_t k = (n-1) / BLOCK + 1;
    size_t x = k;
    size_t y = 1;
    if(x > 65535){
        x = ceil(sqrt(k));
        y = (n-1)/(x*BLOCK) + 1;
    }
    dim3 d = {x, y, 1};
    //printf("%ld %ld %ld %ld\n", n, x, y, x*y*BLOCK);
    return d;
}
#ifdef CUDNN
cudnnHandle_t cudnn_handle()
{
    static int init[16] = {0};
    static cudnnHandle_t handle[16];
    int i = cuda_get_device();
    if(!init[i]) {
        cudnnCreate(&handle[i]);
        init[i] = 1;
    }
    return handle[i];
}
#endif

cublasHandle_t blas_handle()
{
    static int init[16] = {0};
    static cublasHandle_t handle[16];
    int i = cuda_get_device();
    if(!init[i]) {
        cublasCreate(&handle[i]);
        init[i] = 1;
    }
    return handle[i];
}
float *cuda_make_array(float *x, size_t n)
{
    float *x_gpu;
    size_t size = sizeof(float)*n;
    cudaError_t status = cudaMalloc((void **)&x_gpu, size);
    check_error(status);
    if(x){
        status = cudaMemcpy(x_gpu, x, size, cudaMemcpyHostToDevice);
        check_error(status);
    } else {
        fill_gpu(n, 0, x_gpu, 1);
    }
    if(!x_gpu) error("Cuda malloc failed\n");
    return x_gpu;
}

void cuda_random(float *x_gpu, size_t n)
{
    static curandGenerator_t gen[16];
    static int init[16] = {0};
    int i = cuda_get_device();
    if(!init[i]){
        curandCreateGenerator(&gen[i], CURAND_RNG_PSEUDO_DEFAULT);
        curandSetPseudoRandomGeneratorSeed(gen[i], time(0));
        init[i] = 1;
    }
    curandGenerateUniform(gen[i], x_gpu, n);
    check_error(cudaPeekAtLastError());
}
float cuda_compare(float *x_gpu, float *x, size_t n, char *s)
{
    float *tmp = calloc(n, sizeof(float));
    cuda_pull_array(x_gpu, tmp, n);
    //int i;
    //for(i = 0; i < n; ++i) printf("%f %f\n", tmp[i], x[i]);
    axpy_cpu(n, -1, x, 1, tmp, 1);
    float err = dot_cpu(n, tmp, 1, tmp, 1);
    printf("Error %s: %f\n", s, sqrt(err/n));
    free(tmp);
    return err;
}

int *cuda_make_int_array(int *x, size_t n)
{
    int *x_gpu;
    size_t size = sizeof(int)*n;
    cudaError_t status = cudaMalloc((void **)&x_gpu, size);
    check_error(status);
    if(x){
        status = cudaMemcpy(x_gpu, x, size, cudaMemcpyHostToDevice);
        check_error(status);
    }
    if(!x_gpu) error("Cuda malloc failed\n");
    return x_gpu;
}
void cuda_free(float *x_gpu)
{
    cudaError_t status = cudaFree(x_gpu);
    check_error(status);
}

void cuda_push_array(float *x_gpu, float *x, size_t n)
{
    size_t size = sizeof(float)*n;
    cudaError_t status = cudaMemcpy(x_gpu, x, size, cudaMemcpyHostToDevice);
    check_error(status);
}

void cuda_pull_array(float *x_gpu, float *x, size_t n)
{
    size_t size = sizeof(float)*n;
    cudaError_t status = cudaMemcpy(x, x_gpu, size, cudaMemcpyDeviceToHost);
    check_error(status);
}

float cuda_mag_array(float *x_gpu, size_t n)
{
    float *temp = calloc(n, sizeof(float));
    cuda_pull_array(x_gpu, temp, n);
    float m = mag_array(temp, n);
    free(temp);
    return m;
}
#else
void cuda_set_device(int n){}
#endif


@ -0,0 +1,20 @@
#ifndef CUDA_H
#define CUDA_H
#include "darknet.h"
#ifdef GPU
void check_error(cudaError_t status);
cublasHandle_t blas_handle();
int *cuda_make_int_array(int *x, size_t n);
void cuda_random(float *x_gpu, size_t n);
float cuda_compare(float *x_gpu, float *x, size_t n, char *s);
dim3 cuda_gridsize(size_t n);
#ifdef CUDNN
cudnnHandle_t cudnn_handle();
#endif
#endif
#endif

File diff suppressed because it is too large


@ -0,0 +1,50 @@
#ifndef DATA_H
#define DATA_H
#include <pthread.h>
#include "darknet.h"
#include "matrix.h"
#include "list.h"
#include "image.h"
#include "tree.h"
static inline float distance_from_edge(int x, int max)
{
    int dx = (max/2) - x;
    if (dx < 0) dx = -dx;
    dx = (max/2) + 1 - dx;
    dx *= 2;
    float dist = (float)dx/max;
    if (dist > 1) dist = 1;
    return dist;
}
void load_data_blocking(load_args args);
void print_letters(float *pred, int n);
data load_data_captcha(char **paths, int n, int m, int k, int w, int h);
data load_data_captcha_encode(char **paths, int n, int m, int w, int h);
data load_data_detection(int n, char **paths, int m, int w, int h, int boxes, int classes, float jitter, float hue, float saturation, float exposure);
data load_data_tag(char **paths, int n, int m, int k, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure);
matrix load_image_augment_paths(char **paths, int n, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure, int center);
data load_data_super(char **paths, int n, int m, int w, int h, int scale);
data load_data_augment(char **paths, int n, int m, char **labels, int k, tree *hierarchy, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure, int center);
data load_data_regression(char **paths, int n, int m, int classes, int min, int max, int size, float angle, float aspect, float hue, float saturation, float exposure);
data load_go(char *filename);
data load_data_writing(char **paths, int n, int m, int w, int h, int out_w, int out_h);
void get_random_batch(data d, int n, float *X, float *y);
data get_data_part(data d, int part, int total);
data get_random_data(data d, int num);
data load_categorical_data_csv(char *filename, int target, int k);
void normalize_data_rows(data d);
void scale_data_rows(data d, float s);
void translate_data_rows(data d, float s);
void randomize_data(data d);
data *split_data(data d, int part, int total);
data concat_datas(data *d, int n);
void fill_truth(char *path, char **labels, int k, float *truth);
#endif


@ -0,0 +1,137 @@
#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
#include "convolutional_layer.h"
#include "deconvolutional_layer.h"
#include "batchnorm_layer.h"
#include "gemm.h"
#include "blas.h"
#include "im2col.h"
#include "col2im.h"
#include "utils.h"
#include "cuda.h"
void forward_deconvolutional_layer_gpu(layer l, network net)
{
    int i;

    int m = l.size*l.size*l.n;
    int n = l.h*l.w;
    int k = l.c;

    fill_gpu(l.outputs*l.batch, 0, l.output_gpu, 1);

    for(i = 0; i < l.batch; ++i){
        float *a = l.weights_gpu;
        float *b = net.input_gpu + i*l.c*l.h*l.w;
        float *c = net.workspace;

        gemm_gpu(1,0,m,n,k,1,a,m,b,n,0,c,n);
        col2im_gpu(net.workspace, l.out_c, l.out_h, l.out_w, l.size, l.stride, l.pad, l.output_gpu+i*l.outputs);
    }
    if (l.batch_normalize) {
        forward_batchnorm_layer_gpu(l, net);
    } else {
        add_bias_gpu(l.output_gpu, l.biases_gpu, l.batch, l.n, l.out_w*l.out_h);
    }
    activate_array_gpu(l.output_gpu, l.batch*l.n*l.out_w*l.out_h, l.activation);
}
void backward_deconvolutional_layer_gpu(layer l, network net)
{
    int i;

    //constrain_gpu(l.outputs*l.batch, 1, l.delta_gpu, 1);
    gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);

    if(l.batch_normalize){
        backward_batchnorm_layer_gpu(l, net);
    } else {
        backward_bias_gpu(l.bias_updates_gpu, l.delta_gpu, l.batch, l.n, l.out_w*l.out_h);
    }

    //if(net.delta_gpu) memset(net.delta_gpu, 0, l.batch*l.h*l.w*l.c*sizeof(float));

    for(i = 0; i < l.batch; ++i){
        int m = l.c;
        int n = l.size*l.size*l.n;
        int k = l.h*l.w;

        float *a = net.input_gpu + i*m*k;
        float *b = net.workspace;
        float *c = l.weight_updates_gpu;

        im2col_gpu(l.delta_gpu + i*l.outputs, l.out_c, l.out_h, l.out_w,
                l.size, l.stride, l.pad, b);
        gemm_gpu(0,1,m,n,k,1,a,k,b,k,1,c,n);

        if(net.delta_gpu){
            int m = l.c;
            int n = l.h*l.w;
            int k = l.size*l.size*l.n;

            float *a = l.weights_gpu;
            float *b = net.workspace;
            float *c = net.delta_gpu + i*n*m;

            gemm_gpu(0,0,m,n,k,1,a,k,b,n,1,c,n);
        }
    }
}
void pull_deconvolutional_layer(layer l)
{
    cuda_pull_array(l.weights_gpu, l.weights, l.c*l.n*l.size*l.size);
    cuda_pull_array(l.biases_gpu, l.biases, l.n);
    cuda_pull_array(l.weight_updates_gpu, l.weight_updates, l.c*l.n*l.size*l.size);
    cuda_pull_array(l.bias_updates_gpu, l.bias_updates, l.n);
    if (l.batch_normalize){
        cuda_pull_array(l.scales_gpu, l.scales, l.n);
        cuda_pull_array(l.rolling_mean_gpu, l.rolling_mean, l.n);
        cuda_pull_array(l.rolling_variance_gpu, l.rolling_variance, l.n);
    }
}

void push_deconvolutional_layer(layer l)
{
    cuda_push_array(l.weights_gpu, l.weights, l.c*l.n*l.size*l.size);
    cuda_push_array(l.biases_gpu, l.biases, l.n);
    cuda_push_array(l.weight_updates_gpu, l.weight_updates, l.c*l.n*l.size*l.size);
    cuda_push_array(l.bias_updates_gpu, l.bias_updates, l.n);
    if (l.batch_normalize){
        cuda_push_array(l.scales_gpu, l.scales, l.n);
        cuda_push_array(l.rolling_mean_gpu, l.rolling_mean, l.n);
        cuda_push_array(l.rolling_variance_gpu, l.rolling_variance, l.n);
    }
}
void update_deconvolutional_layer_gpu(layer l, update_args a)
{
    float learning_rate = a.learning_rate*l.learning_rate_scale;
    float momentum = a.momentum;
    float decay = a.decay;
    int batch = a.batch;

    if(a.adam){
        adam_update_gpu(l.weights_gpu, l.weight_updates_gpu, l.m_gpu, l.v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.nweights, batch, a.t);
        adam_update_gpu(l.biases_gpu, l.bias_updates_gpu, l.bias_m_gpu, l.bias_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.n, batch, a.t);
        if(l.scales_gpu){
            adam_update_gpu(l.scales_gpu, l.scale_updates_gpu, l.scale_m_gpu, l.scale_v_gpu, a.B1, a.B2, a.eps, decay, learning_rate, l.n, batch, a.t);
        }
    }else{
        axpy_gpu(l.nweights, -decay*batch, l.weights_gpu, 1, l.weight_updates_gpu, 1);
        axpy_gpu(l.nweights, learning_rate/batch, l.weight_updates_gpu, 1, l.weights_gpu, 1);
        scal_gpu(l.nweights, momentum, l.weight_updates_gpu, 1);

        axpy_gpu(l.n, learning_rate/batch, l.bias_updates_gpu, 1, l.biases_gpu, 1);
        scal_gpu(l.n, momentum, l.bias_updates_gpu, 1);

        if(l.scales_gpu){
            axpy_gpu(l.n, learning_rate/batch, l.scale_updates_gpu, 1, l.scales_gpu, 1);
            scal_gpu(l.n, momentum, l.scale_updates_gpu, 1);
        }
    }
}

Some files were not shown because too many files have changed in this diff