Well, you have put a lot of blood and sweat into writing your latest blog post on Machine Learning. Don’t let that effort go in vain – let the world know about it. Sharing your blog posts across different channels not only gives you exposure but can also earn you tremendous feedback on your work. In my personal experience, this feedback has been super useful in improving myself not only as a writer but also as a communicator. There can be times when you have missed out on a super important detail, or have unknowingly introduced a sneaky bug in the code listings of your blog – those things can be caught in the process of feedback exchange.
In this short article, I am going to list a few different ways to share your work and get feedback. Note that your work can be anything from a crucial GitHub PR to a weekend project. Although the following platforms and communities are mostly limited to Machine Learning, I hope this guide will be useful for tech bloggers in general.
Sharing to aid discussions
You might be active on online forums like Quora, StackOverflow, and so on. While participating in a discussion on those forums, you can make effective use of your work if it is relevant. In these cases, the approach is not to just drop a link to your work, but to first write up the important points relevant to the discussion and then supply the link to your work to support them. Let’s say there’s a discussion going on around the topic “What is Weight Initialization in Neural Nets?” Here’s how I would approach my comment:
A neural net can be viewed as a function with learnable parameters, and those parameters are often referred to as weights and biases. When training starts, these parameters (typically the weights) are initialized in a number of different ways - sometimes with constant values like 0’s and 1’s, sometimes with values sampled from a distribution (typically a uniform or normal distribution), and sometimes with more sophisticated schemes like Xavier initialization. The performance of a neural net depends a lot on how its parameters are initialized at the start of training. Moreover, if we initialize the weights with unconstrained random values on each run, training is bound to be (almost) non-reproducible and may not perform well either. On the other hand, if we initialize them with constant values, the net might take far too long to converge, and we also lose the beauty of randomness, which is what gives a neural net the power to reach convergence more quickly with gradient-based learning. We clearly need a better way to initialize the weights. Careful initialization of weights helps us train neural nets better. To know more, please follow this article of mine.
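To make such a comment more concrete, you could even attach a minimal sketch of the three initialization schemes mentioned above (constant, sampling from a normal distribution, and Xavier). The snippet below is one illustrative way to do it with NumPy; the layer sizes at the end are arbitrary placeholders, not taken from any particular article.

```python
import numpy as np

def init_zeros(fan_in, fan_out):
    # Constant initialization: every weight starts at 0.
    return np.zeros((fan_in, fan_out))

def init_normal(fan_in, fan_out, std=0.01):
    # Random initialization: weights sampled from a normal distribution.
    return np.random.normal(loc=0.0, scale=std, size=(fan_in, fan_out))

def init_xavier(fan_in, fan_out):
    # Xavier (Glorot) uniform initialization: keeps the variance of the
    # activations roughly constant from layer to layer.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(low=-limit, high=limit, size=(fan_in, fan_out))

# Example: a layer mapping 784 inputs to 256 hidden units (placeholder sizes).
W = init_xavier(784, 256)
print(W.shape, W.mean(), W.std())
```

A short, self-contained snippet like this gives readers something they can run immediately, and the link to your full article then serves as the deeper dive.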
Well, that’s it for now. I hope this proves useful for you. Please share any suggestions you may have in the comments. I am thankful to Alessio of FloydHub for sharing these tips with me.