
Refine the readme structure.

tags/v0.100.5-BERT-load
Yaohui Liu Haiping 2 years ago
commit 69889a3df1
2 changed files with 122 additions and 94 deletions
  1. README.md (+67, -94)
  2. docs/Example-fsharp.md (+55, -0)

README.md (+67, -94)

@@ -11,12 +11,12 @@


English | [中文](docs/Readme-CN.md)


*master branch is based on tensorflow v2.x, v0.6x branch is based on tensorflow v2.6, v0.15-tensorflow1.15 is from tensorflow1.15.*
*The master branch corresponds to tensorflow v2.10, the v0.6x branch to tensorflow v2.6, and v0.15-tensorflow1.15 to tensorflow 1.15.*




![tensors_flowing](docs/assets/tensors_flowing.gif)


### Why TensorFlow.NET ?
## Why TensorFlow.NET ?


`SciSharp STACK`'s mission is to bring popular data science technology into the .NET world and to provide .NET developers with a powerful Machine Learning tool set without reinventing the wheel. Since the APIs are kept as similar as possible, you can immediately adapt any existing TensorFlow code in C# or F# with a zero learning curve. Take a look at a comparison picture and see how comfortably a TensorFlow/Python script translates into a C# program with TensorFlow.NET.
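
For a flavour of that similarity, here is a minimal sketch (not the comparison picture itself) of a few lines of eager-mode TensorFlow written with the C# binding:

```csharp
// A minimal sketch of the Python-style API in C# (illustrative only, not taken
// from the comparison picture).
using static Tensorflow.Binding;   // roughly the C# counterpart of `import tensorflow as tf`

var a = tf.constant(2.0f);
var b = tf.constant(3.0f);
var c = tf.add(a, b);              // same op names as the Python API
print(c.numpy());                  // eager execution, prints 5
```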


@@ -28,58 +28,56 @@ In comparison to other projects, like for instance [TensorFlowSharp](https://www


[ML.NET](https://github.com/dotnet/machinelearning) also supports using TensorFlow as a backend to train and run inference on your model, which provides better integration with .NET.


Go through the online docs [TensorFlow for .NET](https://scisharp.github.io/tensorflow-net-docs) before you get started with Machine Learning in .NET.
## Documentation


### How to use
Introduction and simple examples: [Tensorflow.NET Documents](https://scisharp.github.io/tensorflow-net-docs)


| TensorFlow | tf native1.14, cuda 10.0 | tf native 1.15, cuda 10.0 | tf native 2.3, cuda 10.1 | tf native 2.4, cuda 11 |
| -------------------------- | ------------- | -------------- | ------------- | ------------- |
| tf.net 0.4x, tf.keras 0.5 | | | | x |
| tf.net 0.3x, tf.keras 0.4 | | | x | |
| tf.net 0.2x | | x | x | |
| tf.net 0.15 | x | x | | |
| tf.net 0.14 | x | | | |
Detailed documentation: [The Definitive Guide to Tensorflow.NET](https://tensorflownet.readthedocs.io/en/latest/FrontCover.html)


Troubleshooting of running example or installation, please refer [here](tensorflowlib/README.md).
Examples: [TensorFlow.NET Examples](https://github.com/SciSharp/TensorFlow.NET-Examples)


There are many examples reside at [TensorFlow.NET Examples](https://github.com/SciSharp/TensorFlow.NET-Examples) written in C# and F#.
Troubleshooting for running examples or installation: [Tensorflow.NET FAQ](tensorflowlib/README.md)


#### TensorFlow.net Version
` tf.net 0.4x -> tf native 2.4`
`tf.net 0.6x -> tf native 2.6`
`tf.net 0.7x -> tf native 2.7`
`tf.net 0.10x -> tf native 2.10`
`...`
## Usage


#### C# Example
### Installation

You can search for the package names in the NuGet Package Manager, or use the commands below in the Package Manager Console.

The installation consists of two parts; the first is the main body:


Install TF.NET and TensorFlow binary through NuGet.
```sh
### install tensorflow C#/F# binding
### Install Tensorflow.NET
PM> Install-Package TensorFlow.NET
### install keras for tensorflow

### Install Tensorflow.Keras
PM> Install-Package TensorFlow.Keras
```

The second part provides the computing support. Only one of the following packages is needed, depending on your device and system.


### Install tensorflow binary
### For CPU version
```
### Cpu version for Windows, Linux and Mac
PM> Install-Package SciSharp.TensorFlow.Redist


### For GPU version (CUDA and cuDNN are required)
### Gpu version for Windows (CUDA and CUDNN are required)
PM> Install-Package SciSharp.TensorFlow.Redist-Windows-GPU

### Gpu version for Linux (CUDA and CUDNN are required)
PM> Install-Package SciSharp.TensorFlow.Redist-Linux-GPU
```
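
After installing the binding plus one of the redistributable packages, a tiny console program can confirm that both the managed and the native libraries load. This is only a sketch for checking the setup, not part of the README:

```csharp
// Smoke test (a sketch, not from the README): if this prints a tensor, both
// TensorFlow.NET and the native SciSharp.TensorFlow.Redist* package loaded correctly.
using static Tensorflow.Binding;

var hello = tf.constant("Hello, TensorFlow.NET!");
print(hello);
```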


Import TF.NET and Keras API in your project.

Two simple examples are given here to introduce the basic usage of Tensorflow.NET. As you can see, it's easy to write C# code just as you would in Python.

### Example - Linear Regression in `Eager` mode


```csharp
using static Tensorflow.Binding;
using static Tensorflow.KerasApi;
using Tensorflow;
using Tensorflow.NumPy;
```

Linear Regression in `Eager` mode:


```csharp
// Parameters
var training_steps = 1000;
var learning_rate = 0.01f;
@@ -125,9 +123,14 @@ foreach (var step in range(1, training_steps + 1))


Run this example in [Jupyter Notebook](https://github.com/SciSharp/SciSharpCube).
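
The body of the training loop is elided by the hunk above; it follows the same eager-mode pattern as the F# example added in this commit. The C# below is a sketch inferred from that F# code, with variable names (`W`, `b`, `train_X`, `train_Y`, `n_samples`, `optimizer`) assumed rather than copied from the README:

```csharp
// Sketch of the elided training loop (inferred from docs/Example-fsharp.md in
// this commit; not the verbatim README code).
foreach (var step in range(1, training_steps + 1))
{
    // Wrap the computation in a GradientTape for automatic differentiation.
    using var g = tf.GradientTape();
    var pred = W * train_X + b;                                             // linear model (Wx + b)
    var loss = tf.reduce_sum(tf.pow(pred - train_Y, 2)) / (2 * n_samples);  // mean squared error
    var gradients = g.gradient(loss, (W, b));                               // d(loss)/dW, d(loss)/db
    optimizer.apply_gradients(zip(gradients, (W, b)));                      // one SGD update

    if (step % display_step == 0)
        print($"step: {step}, loss: {loss.numpy()}, W: {W.numpy()}, b: {b.numpy()}");
}
```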


Toy version of `ResNet` in `Keras` functional API:
### Example - Toy version of `ResNet` in `Keras` functional API


```csharp
using static Tensorflow.Binding;
using static Tensorflow.KerasApi;
using Tensorflow;
using Tensorflow.NumPy;

var layers = new LayersApi();
// input layer
var inputs = keras.Input(shape: (32, 32, 3), name: "img");
@@ -165,83 +168,52 @@ model.fit(x_train[new Slice(0, 2000)], y_train[new Slice(0, 2000)],
validation_split: 0.2f);
```
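
The interesting part of the toy `ResNet`, the residual (skip) connection, sits inside the elided portion of the hunk. The sketch below shows what such a connection looks like in the functional API; the layer arguments and names such as `block_1_output` are illustrative, not the verbatim README code:

```csharp
// Sketch of one residual block in the Keras functional API (illustrative only).
var x = layers.Conv2D(64, 3, activation: "relu", padding: "same").Apply(block_1_output);
x = layers.Conv2D(64, 3, activation: "relu", padding: "same").Apply(x);
// The skip connection: add the block input back onto its output.
var block_2_output = layers.Add().Apply(new Tensors(x, block_1_output));
```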


#### F# Example
The F# example for linear regression is available [here](docs/Example-fsharp.md).


Linear Regression in `Eager` mode:
More advanced examples can be found in [TensorFlow.NET Examples](https://github.com/SciSharp/TensorFlow.NET-Examples).


```fsharp
#r "nuget: TensorFlow.Net"
#r "nuget: TensorFlow.Keras"
#r "nuget: SciSharp.TensorFlow.Redist"
## Version Relationships


open Tensorflow
open Tensorflow.NumPy
open type Tensorflow.Binding
open type Tensorflow.KerasApi
| TensorFlow.NET Versions | tensorflow 1.14, cuda 10.0 | tensorflow 1.15, cuda 10.0 | tensorflow 2.3, cuda 10.1 | tensorflow 2.4, cuda 11 | tensorflow 2.7, cuda 11 | tensorflow 2.10, cuda 11 |
| -------------------------- | ------------- | -------------- | ------------- | ------------- | ------------ | ------------ |
| tf.net 0.10x, tf.keras 0.10 | | | | | | x |
| tf.net 0.7x, tf.keras 0.7 | | | | | x | |
| tf.net 0.4x, tf.keras 0.5 | | | | x | | |
| tf.net 0.3x, tf.keras 0.4 | | | x | | | |
| tf.net 0.2x | | x | x | | | |
| tf.net 0.15 | x | x | | | | |
| tf.net 0.14 | x | | | | | |


let tf = New<tensorflow>()
tf.enable_eager_execution()


// Parameters
let training_steps = 1000
let learning_rate = 0.01f
let display_step = 100

// Sample data
let train_X =
    np.array(3.3f, 4.4f, 5.5f, 6.71f, 6.93f, 4.168f, 9.779f, 6.182f, 7.59f, 2.167f,
             7.042f, 10.791f, 5.313f, 7.997f, 5.654f, 9.27f, 3.1f)
let train_Y =
    np.array(1.7f, 2.76f, 2.09f, 3.19f, 1.694f, 1.573f, 3.366f, 2.596f, 2.53f, 1.221f,
             2.827f, 3.465f, 1.65f, 2.904f, 2.42f, 2.94f, 1.3f)
let n_samples = train_X.shape.[0]

// We can set a fixed init value in order to demo
let W = tf.Variable(-0.06f,name = "weight")
let b = tf.Variable(-0.73f, name = "bias")
let optimizer = keras.optimizers.SGD(learning_rate)

// Run training for the given number of steps.
for step = 1 to (training_steps + 1) do
    // Run the optimization to update W and b values.
    // Wrap computation inside a GradientTape for automatic differentiation.
    use g = tf.GradientTape()
    // Linear regression (Wx + b).
    let pred = W * train_X + b
    // Mean square error.
    let loss = tf.reduce_sum(tf.pow(pred - train_Y,2)) / (2 * n_samples)
    // should stop recording
    // compute gradients
    let gradients = g.gradient(loss,struct (W,b))

    // Update W and b following gradients.
    optimizer.apply_gradients(zip(gradients, struct (W,b)))

    if (step % display_step) = 0 then
        let pred = W * train_X + b
        let loss = tf.reduce_sum(tf.pow(pred-train_Y,2)) / (2 * n_samples)
        printfn $"step: {step}, loss: {loss.numpy()}, W: {W.numpy()}, b: {b.numpy()}"
```
tf.net 0.4x -> tf native 2.4
tf.net 0.6x -> tf native 2.6
tf.net 0.7x -> tf native 2.7
tf.net 0.10x -> tf native 2.10
...
```


Read the book [The Definitive Guide to Tensorflow.NET](https://tensorflownet.readthedocs.io/en/latest/FrontCover.html) if you want to know more about TensorFlow for .NET under the hood.
## Contribution:


### Contribute:
Feel like contributing to one of the hottest projects in the Machine Learning field? Want to know how Tensorflow magically creates the computational graph?


Feel like contributing to one of the hottest projects in the Machine Learning field? Want to know how Tensorflow magically creates the computational graph? We appreciate every contribution however small. There are tasks for novices to experts alike, if everyone tackles only a small task the sum of contributions will be huge.
We appreciate every contribution, however small! There are tasks for novices and experts alike; if everyone tackles only a small task, the sum of contributions will be huge.


You can:
* Let everyone know about this project
* Port Tensorflow unit tests from Python to C# or F#
* Port missing Tensorflow code from Python to C# or F#
* Port Tensorflow examples to C# or F# and raise issues if you come accross missing parts of the API
* Debug one of the unit tests that is marked as Ignored to get it to work
* Debug one of the not yet working examples and get it to work
- Star Tensorflow.NET or share it with others
- Tell us about the missing APIs compared to Tensorflow
- Port Tensorflow unit tests from Python to C# or F#
- Port Tensorflow examples to C# or F# and raise issues if you come across missing parts of the API or bugs
- Debug one of the unit tests that is marked as Ignored to get it to work
- Debug one of the not yet working examples and get it to work
- Help us complete the documentation.


### How to debug unit tests:

#### How to debug unit tests:


The best way to find out why a unit test is failing is to single-step it in C# or F# and its corresponding Python at the same time to see where the flow of execution digresses or where variables exhibit different values. Good Python IDEs like PyCharm let you single-step into the tensorflow library code.
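
As a purely hypothetical illustration of that workflow (none of this is from the repo's test suite), you can reproduce the same small computation on both sides and step into the op on whichever side diverges:

```csharp
// Hypothetical side-by-side debugging harness (not an actual test from the repo).
// Run the same computation here and in a Python/PyCharm session, print both
// results, and single-step into tf.reduce_sum on the side that diverges.
using static Tensorflow.Binding;

var x = tf.constant(new[] { 1.0f, 2.0f, 3.0f });
var sum = tf.reduce_sum(x);   // set a breakpoint here and step into the binding
print(sum.numpy());           // expected: 6, same as tf.reduce_sum in Python
```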


### Git Knowhow for Contributors
#### Git Knowhow for Contributors


Add SciSharp/TensorFlow.NET as upstream to your local repo ...
```git
@@ -252,6 +224,7 @@ Please make sure you keep your fork up to date by regularly pulling from upstrea
```git
git pull upstream master
```

### Support
Buy our book to help keep the open source project sustainable: [TensorFlow.NET实战](https://item.jd.com/13441549.html)
<p float="left">


docs/Example-fsharp.md (+55, -0)

@@ -0,0 +1,55 @@
Linear Regression in `Eager` mode:

```fsharp
#r "nuget: TensorFlow.Net"
#r "nuget: TensorFlow.Keras"
#r "nuget: SciSharp.TensorFlow.Redist"

open Tensorflow
open Tensorflow.NumPy
open type Tensorflow.Binding
open type Tensorflow.KerasApi

let tf = New<tensorflow>()
tf.enable_eager_execution()

// Parameters
let training_steps = 1000
let learning_rate = 0.01f
let display_step = 100

// Sample data
let train_X =
    np.array(3.3f, 4.4f, 5.5f, 6.71f, 6.93f, 4.168f, 9.779f, 6.182f, 7.59f, 2.167f,
             7.042f, 10.791f, 5.313f, 7.997f, 5.654f, 9.27f, 3.1f)
let train_Y =
    np.array(1.7f, 2.76f, 2.09f, 3.19f, 1.694f, 1.573f, 3.366f, 2.596f, 2.53f, 1.221f,
             2.827f, 3.465f, 1.65f, 2.904f, 2.42f, 2.94f, 1.3f)
let n_samples = train_X.shape.[0]

// We can set a fixed init value in order to demo
let W = tf.Variable(-0.06f,name = "weight")
let b = tf.Variable(-0.73f, name = "bias")
let optimizer = keras.optimizers.SGD(learning_rate)

// Run training for the given number of steps.
for step = 1 to (training_steps + 1) do
    // Run the optimization to update W and b values.
    // Wrap computation inside a GradientTape for automatic differentiation.
    use g = tf.GradientTape()
    // Linear regression (Wx + b).
    let pred = W * train_X + b
    // Mean square error.
    let loss = tf.reduce_sum(tf.pow(pred - train_Y,2)) / (2 * n_samples)
    // should stop recording
    // compute gradients
    let gradients = g.gradient(loss,struct (W,b))

    // Update W and b following gradients.
    optimizer.apply_gradients(zip(gradients, struct (W,b)))

    if (step % display_step) = 0 then
        let pred = W * train_X + b
        let loss = tf.reduce_sum(tf.pow(pred-train_Y,2)) / (2 * n_samples)
        printfn $"step: {step}, loss: {loss.numpy()}, W: {W.numpy()}, b: {b.numpy()}"
```
